Author: Nataraj Sindam

  • Statsig Founder Vijaye Raji on Building a Data-Driven Platform

    Introduction

    After a decade at Microsoft and another at Facebook, where he served as VP and Head of Entertainment, Vijaye Raji took the leap from big tech executive to startup founder. In 2021, he launched Statsig, an all-in-one product development platform designed to empower teams with experimentation, feature management, and product analytics. Built on the principles he learned scaling products for billions of users, Statsig helps companies like OpenAI, Notion, and Whatnot make data-informed decisions and accelerate growth.

    In this conversation with Nataraj, Vijaye shares his journey, the tactical lessons learned in hiring and scaling, and the cultural shifts required when transitioning from a corporate giant to a lean startup. He dives deep into how modern product teams are leveraging rapid iteration and experimentation, and offers his perspective on what the future of product development looks like in an AI-first world.

    → Enjoy this conversation with Vijaye Raji on Spotify, Apple, or YouTube.

    → Subscribe to our newsletter and never miss an update.


    Lightly Edited Transcript

    Nataraj: As I was researching this conversation, I found out that when you were considering leaving Facebook, Mark Zuckerberg tried to convince you to stay at the company. What was that moment like?

    Vijaye Raji: This was not the first time. I had tried to leave the company a couple of times before then, and every single time a conversation convinced me to stay: there’s a lot to be done here, a startup is a small company, the impact you will have is not that big. For all those good reasons, I stayed at Facebook. But when I hit the ten-year mark, I knew I had to do something new for my own sake. I felt like I needed the startup in my life, so I left.

    Nataraj: You’ve been in big tech companies for almost two decades at that point. What was the personal motivation? There’s always a personal calculus. I’m doing all this work for a company, can I own more equity? What was your thinking at that point?

    Vijaye Raji: I started at Microsoft and spent about 10 years there as a software engineer. To be completely transparent, I loved Microsoft. I enjoyed it and learned a lot; everything I know about software engineering I learned there, and Microsoft was the best place for it. This was back in the early 2000s, when you learned how to build software and plan for something shipping two years down the line. It’s like a science, and there’s so much to learn from so many good people, so I had a great time learning all of that. At some point, I had thought about building something different. This is probably very common nowadays: you’re in a holding pattern for your green card. You can’t really leave without resetting your green card process, so you don’t really explore other options while you’re in that situation. I was in that holding pattern for a little while. Once I got my green card, the first thing I did was look around, and luckily for me, Facebook was starting up an office in Seattle. That was my first jump after what I thought was a really good learning experience for a whole decade. I went into Facebook at that time, and Facebook was a startup. It was a late-stage startup, not quite ready for IPO, so I thought I was joining a very small company. Leaving behind a company of 100,000 people to join one that was only 1,000 or 1,200 people at the time was incredibly different and a good learning experience. I thought I was learning a lot at Microsoft, and then I went to Facebook, and it was a completely new world. That’s how I went from one big company to what I thought was a startup, and then, eventually, you know the story of Facebook. It grew so fast that by the time I left, it had grown to 65,000 people or something. That was a lot of good learning, because when you’re in a company that is growing that fast, you learn a lot and you get exposed to a lot.

    Nataraj: By the time you left, you were leading entertainment at Facebook and also leading the Facebook Seattle office.

    Vijaye Raji: Yeah, one of the things that I generally do is every couple of years or so, I try something completely new. Even at Microsoft, I started with Microsoft TV, which was a set-top box, and then moved on to the developer division, doing Visual Studio and building compiler services. After that, I was working on SQL Server, building databases, and then Windows operating systems. Even within Microsoft, I did various little things. Then at Facebook, I started out as an engineer and worked on Messenger and some ads products, and then I worked on Marketplace, and then gaming and entertainment. Each one of them is pretty different. They don’t have much correlation or continuation, and that’s how I’ve always operated in my career. When I left, I was the head of entertainment, which included everything from videos and music to movies, and also the head of Seattle, which when I joined was about a couple dozen people. When I left, we had about 6,500 people spread across 19 different buildings.

    Nataraj: What were some of the interesting problems that you were working on as head of entertainment, and what was the scale of those problems?

    Vijaye Raji: As Head of Entertainment, if you think about Facebook’s apps, there’s a social aspect to it—your friends and your community—and then there’s an entertainment aspect, which is you just want to spend time and be entertained. The kinds of stuff you do for entertainment could be watching videos, listening to music, watching music videos, playing games, or watching other people play games. You watch short clips from TV shows and so on. Another huge area is podcasts. Anything that is not pertaining to your social circle belongs to this entertainment category, and that was my purview. The problems we were trying to solve were about making the time people spend high quality. What do they gain out of it, and how do they get high-quality entertainment? That includes everything from acquiring the right kind of content, understanding what people want, and then personalizing the content to them. It also includes removing content that is not great for the platform, anything violating policy. So you invest quite heavily in the integrity of the platform as well. On the engineering side, scale is a very important problem. When you’re delivering 4K, high-quality, high-bit-rate video over networks that may not be reliable, you have interesting engineering problems to go solve. Those are all super exciting.

    Nataraj: Were you primarily focused on the technology of getting the entertainment on the different Facebook platforms or also part of dealing with the business side of it, like licensing and acquiring content?

    Vijaye Raji: It was part of that too. When you have a product that is observing what people watch, you know what people want. You then want to go and buy more of that content. We had a media procurement team, and you could go to them and say, this is the kind of content that people consume on Facebook, so let’s go get more of those. That plays into the decision of where the company would go invest.

    Nataraj: So you were doing some exciting stuff at Facebook at scale and then you decided it’s time for you to leave and start your own company. Did you evaluate a different set of ideas, or was the idea for Statsig brewing in your mind while you were at Facebook?

    Vijaye Raji: It’s a little bit of both. The first part of the journey was deciding to go start a company. The second part was, what do I go build? Deciding to start a company had been brewing for a long time. It was one of those things that I would regret if I didn’t do it. As for what to go build, because of my varied experience doing everything from gaming to ads to marketplaces to videos, I had lots of ideas. When you’re evaluating an idea, you want to take into account what the market size could be, how likely a buyer is to pay you a dollar, and what you are good at. Sometimes you’re going against a lot of competitors, so what are we really good at? What could I bring that could be an advantage? Those are all the factors that go into it. If you think about it, your passion is driven by your heart, but this logical analysis is driven by your mind. If you’re entirely driven by passion, you may build something that may not be sellable. Those were the kinds of considerations that went into deciding to build a developer platform for decision-making, one that empowers everyone to make the right decisions using data.

    Nataraj: So, once you decided on this particular experimentation developer platform, how did you go about getting those first couple of customers?

    Vijaye Raji: It’s a good journey and a good lesson for everyone building startups. Usually, when you have a founder with an immense amount of faith and conviction that this is what I’m going to build, you are very mission-driven. While you’re building, you’re talking to a lot of people. This is the part where I made all kinds of mistakes. You go to someone you know who is willing to spend 30 minutes with you and say, ‘I’m going to build this developer platform, it’s going to be pretty awesome, it’s going to have all kinds of features.’ What are they going to say? Chances are, they’re going to say, ‘That sounds like a great idea, you should go do it.’ You talk to enough people, and you build this echo chamber where you are now even more convinced that everybody needs this platform. Then you go build it in a vacuum. We did this for about six months. At the end of six months, we went back to the people I had talked to before and asked if they were willing to start using the product. And you know what? You go talk to them and they say, ‘Let me think about it.’ ‘Let me think about it’ means they’re not really that interested. It’s much harder to have them integrate it into their existing product, and much harder to have them pay a single penny. You learn that lesson. I was talking to one of my co-founders at the time, and they said, ‘You’ve got to go read this book, The Mom Test.’ I went and read it and realized all the mistakes I was making when talking to customers. The point is, first, you need to understand what problems people are facing and whether you have a solution for that problem. To even get to that stage, you need to know who your ideal customer profile is. Then you talk to them and make sure the product you’re selling actually solves the problem. Not only that, you have to be the industry’s best for somebody to even care about your product and then open up their wallet. Those are the kind of hard lessons I learned over the course of the next few months.

    Nataraj: What was the value proposition of Statsig at that point, and why was it different from what already existed in the market?

    Vijaye Raji: The value proposition has not changed since the day we started. It has always been the same: the more you know about your product, the better decisions you’re going to make. What we’re doing is empowering product builders, whether you’re an engineer, data scientist, or product manager. The idea is to observe how people use your product, what they care about, and where they spend more time. All of those are important for you to know how your product is doing. Number two is what features are not working as intended. And number three is using those two insights to know what to go build next. That’s literally what we sell as the value from day one. The differentiation from existing products is that previous products were all point solutions. For feature flagging, you need a separate product. For analytics, a separate product. For experimentation, a separate product. We’re bringing them all together. The benefits are that it consolidates all data into one place, so you don’t have to fragment your data. Number two, because you’re not fragmenting data, it all becomes the source of truth. And number three, it opens up some really interesting scenarios. If you combine flagging with analytics, you can get impact analysis for every single feature. That’s something you can’t do if you have two different products.

    Nataraj: Can you explain flagging for those who might not be aware of feature flags?

    Vijaye Raji: Feature flags are ways for you to control where to launch, how to launch, and to whom. You can decouple code from features so that when you want to launch new code, you can send it to your app store and get it reviewed. Once it’s live, you can turn on features completely decoupled from code releases. It’s extremely powerful. Number one, it’s powerful to know when to launch your product. Number two, when something goes wrong in real-time, you can turn it off. There are lots of other reasons, like doing rollouts at scale. You don’t turn on a feature for 100% of users everywhere; you can do 2%, 4%, 8% just to know that all the metrics you care about are still sound.
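
    To make the rollout mechanics concrete, here is a minimal sketch of a percentage rollout behind a feature flag. This is illustrative Python, not Statsig’s SDK: the flag table and function names are invented. The idea is the one Vijaye describes: a deterministic hash keeps each user in the same bucket as the rollout grows from 2% to 4% to 8%, and turning the flag off acts as an instant kill switch. Logging each check alongside product metrics is what makes per-feature impact analysis possible.

    ```python
    import hashlib

    # Illustrative sketch only; the flag table and function names are invented,
    # not Statsig's SDK.
    FLAGS = {
        "new_checkout": {"enabled": True, "rollout_pct": 4},  # currently at a 4% rollout
    }

    def bucket(user_id: str, flag_name: str) -> int:
        """Deterministically map a user to a bucket 0-99 so the same user stays
        in (or out of) the rollout as the percentage grows from 2% to 4% to 8%."""
        digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100

    def is_enabled(flag_name: str, user_id: str) -> bool:
        flag = FLAGS.get(flag_name)
        if not flag or not flag["enabled"]:
            return False  # turning the flag off is an instant kill switch
        return bucket(user_id, flag_name) < flag["rollout_pct"]

    if is_enabled("new_checkout", "user_42"):
        print("serve the new code path")       # and log the exposure next to your metrics
    else:
        print("serve the existing code path")
    ```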

    Nataraj: Are there any specific trends across big and small companies on how they’re approaching experimentation or product validation?

    Vijaye Raji: The trend that we are betting on is that more product development is going to be data-driven. That’s the reason why we’re here, and what we’re doing in the industry is accelerating that trend. The education we do for prospects and the industry is basically catalyzing and accelerating this. Product development used to be this siloed thing where a product manager comes up with an idea, engineers code it, testers test it, you ship it, and then you wait for two years for another release cycle. Now it’s a very iterative process, and people ship weekly, daily, and sometimes even hourly. To get to that level of speed, you need controls in place. To allow people to make distributed decisions, you need the ability to know how each line of code you wrote is performing. These tools are getting more adoption day by day, and people moving from the traditional way of development to this modern way all need it. Our bet is that AI is going to accelerate that movement because you’re going to have lots of rich software built from blocks that you’ve just assembled. You need to know if your hypothesis or your original idea actually turned out to be the product. You need these observability tools built into your product to be able to know. Generally, this trend is moving towards data-driven development.

    Nataraj: What is the right time for a company to adopt Statsig? What is the ideal customer profile for you?

    Vijaye Raji: You should start on day one. Every feature that you’re building… I remember the early days of us building software. The first thing we launched was our website, and I was refreshing that page all the time. Whenever somebody visited the website, I was looking at them, seeing the session replay. I was literally spending all my time figuring out how people were using the product. That is an important step in the journey of your company. So start on day one. Integrate with feature flags, product analytics, session replay, and all of that stuff that will give you insights into how users are using your product. Eventually, you’ll get to the place where you want to run experiments. You don’t have to do experiments on day one. When you get there, you have the same tool with all the same historic data now capable of running experiments. There’s no ‘too early’; you just start right away.

    Nataraj: I’ve used different experimentation tools, and one of the negative side effects I see is this idea that we’ll just A/B test everything. It can lead to a little bit of incrementalism and experimentation fatigue. Do you have a take on when to use experimentation versus when to use your gut and your product sense?

    Vijaye Raji: You remember the famously misattributed quote from Henry Ford that said if I had asked people what they needed, they would have said faster horses, not a car. Experimentation is not a replacement for product intuition. You’re not going to get rid of product intuition. To make those leaps from a faster horse to a car, you cannot experiment your way there. You need people with lots of good intuition and drive to get to that kind of leap. But then once you had your first version of the car, from the Model T to where we are now, there have been thousands, if not millions, of improvements made. Those are all things you can experiment with to find better versions of what you currently have. My philosophy is that product intuition and experimentation go hand in hand. Some of these things, you have product intuition, you have conviction, you go do it. But when you’re about to launch it, just make sure that there are no side effects—things that you have not thought about. Products have gotten so complex nowadays, I don’t think anybody out there can meaningfully understand ChatGPT in its entirety. When you’re in that kind of situation, it’s only going to get harder for any one person to fully grok your product. In those cases, observability, instrumentation, and analytics are extremely valuable. Then you have experiments. They don’t have to be just testing two different variants. It could literally be, ‘I believe in this feature. I’m going to launch this feature.’ That is a hypothesis. Validate it. Put it out there and measure how much it’s actually improved in all of those dimensions that you thought about.

    Nataraj: Let’s talk a little bit about growth. You started in 2021. Can you give a sense of the scale of the company right now?

    Vijaye Raji: We have about a couple thousand users, most of them come self-serve. They pick up our product, we have all kinds of open-source SDKs. We have a few hundred enterprise customers that are using our product at scale. And we have massive scale in terms of data. If you think about the few hundred enterprise customers, they all have big data. We process about four petabytes of data every day, which is mind-boggling. Last September, we announced we were processing about a trillion events every single day. Now we’re processing about 1.6 trillion events every single day. To put that in perspective, that’s about 18 million events every second. That’s what our system is processing. That’s been our growth in terms of customers, scale, and infrastructure.

    Nataraj: How are you positioning Statsig? Is it primarily a developer tool, and you’re using that to drive enterprise growth?

    Vijaye Raji: We position ourselves as a product development platform. It caters to engineers, product managers, marketers, and data scientists. There are parts of the product for each individual. If someone comes to us to solve an experimentation problem, it’s usually a data scientist team. But once we’re in that company, our product can grow organically. We don’t charge for seats. The engineering team will adopt Statsig for feature flagging, and the product team will adopt it for analytics and visualization. This organic growth happens within a company, and this is how we have grown even within our existing customer base.

    Nataraj: Where do you spend most of your marketing efforts for the highest ROI?

    Vijaye Raji: There are different outcomes for our marketing efforts. Some are direct response, where we feed people into our website for self-serve sign-ups or talking to our sales team. We track that, and it’s a very seasonal thing. Then there are aspects of marketing that are more brand-related. We want to be out there and build brand awareness. We invest in things like that. One of the fun things we did last year was a billboard in San Francisco. That was really cool because we got a lot of brand awareness from that. We also partner with people like you and some podcasts that we work with who have great reach and brand awareness.

    Nataraj: In terms of culture, you mentioned you’re a completely in-person company. Why does that matter?

    Vijaye Raji: Our product teams are all in-person. Our sales team is spread out in the US, with some in the Bay Area, some in New York, and a couple of people in London. But everyone else—engineering, product, data science, design, marketing—we’re all in-person in Bellevue. It started out with eight of us on day one. There were so many decisions we had to make, all this whiteboarding. It would be very hard to do all of that over Zoom. We naturally gravitated towards doing this in person, and I saw how fast we were able to move by not having to document all the decisions. The clock speed was extremely high, and I was reluctant to ever lose that. It’s a trade-off. We’ve had so many really good people we had to say no to because they were not willing to do this in person. That’s a painful trade-off. But four years in, we’re still in person. We’re about 100 people showing up five days a week, and it’s a self-selection mechanism. There are a lot of positives. We’ve built a really good community, and I like it and want to keep going as long as I can.

    Nataraj: You were in Seattle in 2021 when it was hard to hire talent. How were you figuring out how to hire great people?

    Vijaye Raji: That’s a very good question. You want to hire great people and retain great people. After managing large orgs, the realization I came to is that it’s not the intensity that matters, but what proportion of the time you spend doing things you don’t enjoy. If you’re doing intense work but you love what you’re doing, you have autonomy, and no overhead or friction, people love that. They come in excited, leave exhausted, but they are gratified. As long as you can provide that environment, the intensity can be high, and you don’t have to worry about burnout. To me, it’s always been about how I can remove friction, overhead, and process. These are creative people; I want them to be in their element. Can I provide the best working environment for them? In a startup, if you’re doing a 10-to-5 job, it’s not going to work. People that come into Statsig are already self-selecting into our culture. We’re not doing anything crazy like six days a week, but we are a hardworking group of people, and I like to keep the talent density extremely high because it affects the people that are already here.

    Nataraj: Let’s talk a little bit about AI. How are AI companies using Statsig?

    Vijaye Raji: Absolutely. If you’re a consumer of LLMs, we have an SDK that you can use to consume these APIs, where we will automatically track your latency, your token length, and how your product is doing. We tie it all back to the combination of the prompt, the model, and the input parameters you used, quantifying the lift or regression. We also have prompt experiments in the product. There will be a lot of people building different kinds of prompts and wanting to validate how each one is performing. We have a very specialized experimentation product just for prompt experiments. The rest of it is just a very powerful platform you can use for anything. If you’re OpenAI running experiments on ChatGPT, that’s going through Statsig. Or if you’re Notion, a consumer of AI and LLMs, you pass it through Statsig. Statsig powers you to determine which models work, which combination of all the parameters yields better results. Then there’s how Statsig uses AI to make our customers’ experience better. There’s a lot we’re doing there that I’ll be announcing in the next few months.
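
    As a rough illustration of the kind of bookkeeping such an SDK might do (this sketch is hypothetical, not Statsig’s actual API), you can wrap each LLM call, tag it with the prompt variant, model, and parameters, and record latency and token counts so variants can be compared afterwards.

    ```python
    import time
    from collections import defaultdict

    # Hypothetical illustration, not Statsig's actual API: wrap an LLM call, tag it
    # with the prompt variant, model, and parameters, and record latency and token
    # counts so the variants can be compared afterwards.
    events = defaultdict(list)

    def tracked_completion(call_llm, *, variant: str, model: str, temperature: float, prompt: str) -> str:
        start = time.perf_counter()
        response_text = call_llm(prompt=prompt, model=model, temperature=temperature)  # user-supplied client
        latency_ms = (time.perf_counter() - start) * 1000
        events[(variant, model, temperature)].append({
            "latency_ms": latency_ms,
            "prompt_tokens": len(prompt.split()),            # crude word-count proxy for tokens
            "completion_tokens": len(response_text.split()),
        })
        return response_text

    def summarize_variants():
        # Compare variants on average latency; the same pattern works for any metric.
        for key, rows in events.items():
            avg = sum(r["latency_ms"] for r in rows) / len(rows)
            print(key, f"avg latency {avg:.0f} ms over {len(rows)} calls")
    ```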

    Nataraj: In terms of Statsig, do you have a favorite failure or a deal that fell through that changed things?

    Vijaye Raji: A lot. But one thing I want to call out is the humbling experience when you realize you will never be the first one to come up with the best ideas. Part of it is learning to acknowledge that’s a good idea, give them credit, and then quickly follow on. If it is a great idea and you believe in it, without any ego, just go and build it. If you can build it better than anybody else, then you continue to live on for a couple more days. We famously didn’t take data warehouses seriously in the first year or so. We built everything on the cloud without really taking into consideration warehouses like Snowflake or Databricks. Then we started seeing customers come in with things like, ‘Hey, I have Snowflake. Could you operate on that?’ And we would say, ‘Well, we can ingest data from there, but we can’t operate on top of it.’ You start to believe in your current products. Then those customers start to leave, saying Statsig is not the right product for them. After a couple of those humbling moments, you realize your position is not right. So we spent the next three to four months building a warehouse-native product, and then we came back to the industry and started selling it. That was a very good failure, realization, and then action from the realization.

    Nataraj: What are you consuming right now? It can be anything—books, movies, or TV series.

    Vijaye Raji: I’m a big Audible guy, so I listen to books on my way to work and back. Right now, it’s Brian Greene’s ‘The Elegant Universe.’ I think this is the second time I’m reading it. I just wanted to listen to it all over again, and every time I feel like I catch on to something new from that book.

    Nataraj: What do you know about founding a startup that you wish you knew when you started?

    Vijaye Raji: Thousands of things. I wish I had spent more time with my sales and marketing friends back at Facebook. They’re all good friends, and I’m still in touch with all of them. We used to sit in meetings every week, but I never once thought to drill down deeper. How is your team structured, how are they incentivized, what kind of commissions do they get, how do you think about the different types of marketing? I wish I had learned all of that stuff so I could have saved a lot of failures in the early days.

    Nataraj: For you, I have a special question: what is a big company perk that you miss?

    Vijaye Raji: The recruiting team.

    Nataraj: What is it that you don’t miss?

    Vijaye Raji: A lot of things. In a big company, you’re sitting in review after review after review. Those are not just product reviews; you’re looking at privacy reviews and security reviews, things that are important but end up being overhead. At startups, you can move extremely fast by bypassing a lot of that, or even if you have to take care of them, they are much smaller deals.


    Conclusion

    Vijaye Raji’s journey from scaling products at tech giants to building Statsig from the ground up offers a masterclass in modern product development. His insights underscore the power of combining deep product intuition with rigorous, data-driven validation to build products that customers love. For any founder or product leader, this conversation is a valuable guide to navigating the complexities of hiring, scaling, and maintaining velocity.

    → If you enjoyed this conversation with Vijaye Raji, listen to the full episode here on Spotify, Apple, or YouTube.

    → Subscribe to our newsletter and never miss an update.

  • Molham Aref on Building RelationalAI: An AI Coprocessor for Snowflake

    In this episode of The Startup Project, host Nataraj sits down with Molham Aref, CEO of RelationalAI. With over three decades of experience in enterprise AI and machine learning, Molham shares his journey from pioneering neural networks in the ’90s to founding his latest venture. RelationalAI is tackling a fundamental challenge for modern enterprises: the sheer complexity of building intelligent applications. By creating an AI coprocessor for data clouds like Snowflake, RelationalAI unifies disparate analytics stacks—from predictive and prescriptive analytics to rule-based reasoning and graph analytics—into a single, cohesive platform. Molham discusses the evolution of the modern data stack, the practical applications of GenAI in the enterprise, and offers hard-won advice on founder-led sales for B2B startups. This conversation is a masterclass in building a deep-tech company that solves real-world business problems.

    → Enjoy this conversation with Molham Aref, on Spotify, Apple, or YouTube.
    → Subscribe to our newsletter and never miss an update.

    Nataraj: My guest today is Molham, the CEO of RelationalAI. He was the CEO of LogicBlox and Predictix before this. RelationalAI recently raised a Series B of $75 million from Tiger Global, Madrona, and Menlo Ventures. They’re widely known for solving use cases like rule-based reasoning and predictive analysis for large enterprise customers. So this episode will try to focus on what the modern data stack looks like, what applications can be built, and a lot more interesting things around those topics. With that, Molham, welcome to the show.

    Molham Aref: Thanks, Nataraj. Pleasure to be here. Looking forward to the discussion.

    Nataraj: I think a good place to start would be to talk a little bit about your career and all the things that you’ve done before this. How did you end up with RelationalAI?

    Molham Aref: Great. I have been doing machine learning and AI-type stuff in the enterprise under various labels since the early 90s, so it’s over 30 years now. I started out working on computer vision problems at AT&T as a young engineer coming out of Georgia Tech and worked there for a couple of years. Then I joined a company that was commercializing neural networks, a company called HNC Software, and worked on some of the early neural network systems, particularly in the area of credit card fraud detection. I focused on retail, supply chain, and demand prediction. I was very fortunate to have a wonderful experience there learning about the technology and all the things you have to do when you put together what today we would call an intelligent application. We had a very nice IPO in 1995. We bought a small company called Retek that was working specifically in the retail industry and learned a lot from that experience. We grew Retek quite substantially and spun it out in another IPO in 1999. You start thinking, this is easy. Anyone can do this.

    So in the early 2000s, I started getting involved in startups myself. One was also trying to apply computer vision technology to a brick-and-mortar environment, a company called Brickstream. Unfortunately, Brickstream was a good idea too early. I learned a little bit about how too early is indistinguishable from wrong in the startup game.

    Nataraj: Was it similar to what Amazon Go stores was like? What were you trying to do?

    Molham Aref: Not nearly that sophisticated, but yes, the idea is that you can put stereo cameras in the ceiling of a retail environment—a retail bank, a retail store—and start collecting information about what consumers are doing, where they’re dwelling, what products they look at, within certain error tolerances, and then connecting the whole customer journey because you would get handed off from camera to camera as you walk through the store. Anonymously, we didn’t know who people were; we were just looking at the top of their heads. You build a picture of what the brick-and-mortar experience is like. At the time, this was before deep learning, and everything was handcrafted computer vision algorithms. It worked, but it was expensive. A lot of the challenge was what do you do with that data? So we weren’t just solving a problem around computer vision; we had to justify the value of the data. It was a good experience, but not a good commercial outcome.

    Then I helped start a company in the wireless network space where we built network simulation optimization systems for AT&T, Cingular, and América Móvil, a bunch of wireless operators. We helped them migrate from the 2G systems they were on to 2.5G and 3G systems and helped them manage infrastructure and spectrum and a whole bunch of other stuff that you couldn’t deploy without the benefit of very sophisticated intelligence.

    Nataraj: How did you go from machine learning and vision to wireless networks? How did that idea come up?

    Molham Aref: At the core, a lot of what we do in machine learning and AI is about modeling. We have deployed a handful of modeling techniques: simulation, machine learning, GenAI more recently, rule-based reasoning, mathematical optimization, things like integer programming, and graph analytics. The AI toolbox has half a dozen tools that you deploy in a variety of contexts, and it’s perfectly normal for the domain to vary. Whether you’re modeling a wireless network or a retail supply chain, there are entities that are involved. In a retail supply chain, you might have raw material in the form of fabric. In the wireless industry, you have raw material in the form of spectrum. In the retail industry, you might change that fabric through manufacturing to make it into a t-shirt. In the wireless industry, you take that raw spectrum and, using Ericsson and Nokia equipment, you convert it into a wireless minute or a wireless kilobyte. Then you manage supply and demand accordingly. So in both contexts, you have to predict how many customers are going to want this wireless minute or this t-shirt. You try not to make too much of your product because if you make too much, it’s perishable. It loses value.

    I’m not a wireless expert, but you learn enough about the core concepts in that domain. That company did reasonably well and was acquired by Ericsson. After that, I went back into retail and helped build one of the first companies that built retail solutions on the cloud. We went all in on the cloud in 2008-2009, when that wasn’t an obvious choice, and leveraged the cloud to do a new class of machine learning. My whole career was spent working at companies focused on building one or two intelligent applications, and in every situation, it was a mess. You had to combine different technology stacks: the operational stack with the BI stack, the planning stack, the prescriptive analytics stack, and the predictive analytics stack. You end up spending a lot of time and energy just gluing it all together. That’s fundamentally the reason building intelligent applications is hard. I thought it would be cool to build a generic technology on a popular platform to make it so that you don’t need so many components and so much duct tape.

    Nataraj: It would help for the listeners if you can talk a little bit more about what predictive analytics, prescriptive analytics, or rule-based reasoning mean.

    Molham Aref: Broadly speaking, you have descriptive analytics. It answers the question of what happened in my business. Business intelligence is a form of that. You’re looking backward and saying, what were the sales of flat-screen televisions in Boston last year? You can aggregate the data by region, by different time granularity, by different types of products. If you just have descriptive analytics, it’s up to the human to look at that and then project forward as to what you’ll sell in January in Boston or Philadelphia. There’s a ton of data to look at, and you can improve that with a variety of modeling techniques, everything from time series forecasting to today’s graph neural networks to help you understand what drives demand. If you can now predict what the demand is going to be in January or February next year under various promotional and pricing scenarios, you can now leverage prescriptive analytic technology. Descriptive, predictive. Prescriptive is, I know the relationship between, say, price and demand. What should I set the price at to maximize revenue or profit or market share? The technology you use for each of these tasks is different. GenAI, of course, is a very powerful new technique that we have in our toolbox. But at its core, it’s predictive because it’s trying to predict the next word in a sentence.
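
    A toy example makes the ladder concrete. The numbers and the linear demand curve below are invented purely for illustration: descriptive is the observed history, predictive is the fitted demand model, and prescriptive is searching that model for the price that maximizes revenue.

    ```python
    # Toy numbers and a linear demand curve, invented purely for illustration.

    # Descriptive: last year's observed (price, units_sold) pairs for one product.
    history = [(400, 1200), (450, 1050), (500, 900)]

    # Predictive: fit a simple demand model, units = a + b * price.
    n = len(history)
    mean_p = sum(p for p, _ in history) / n
    mean_u = sum(u for _, u in history) / n
    b = sum((p - mean_p) * (u - mean_u) for p, u in history) / sum((p - mean_p) ** 2 for p, _ in history)
    a = mean_u - b * mean_p

    def predicted_demand(price: float) -> float:
        return a + b * price

    # Prescriptive: search candidate prices for the one that maximizes predicted revenue.
    best_price = max(range(300, 701, 10), key=lambda p: p * predicted_demand(p))
    print(best_price, round(best_price * predicted_demand(best_price)))
    ```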

    Nataraj: You figured out it’s very hard to deploy these solutions and you started RelationalAI in 2017, before GenAI. What were the primary use cases and types of customers you were targeting at that point, and how did it evolve?

    Molham Aref: My area of expertise and our team’s area of expertise is in the enterprise, deploying all these different techniques, including rule-based reasoning, which is symbolic reasoning. The idea was to take all of these techniques because for any hard problem in the enterprise, you can deploy all of them to solve it. We help build applications today that have elements of GenAI, graph neural networks, integer programming, graph analytics, and rule-based reasoning. I strongly believe that the combination is what wins. My view is AI and machine learning, in particular, drive us towards data-centricity because the datasets involved are big. The old architectures that use the database in a dumb way and then pull data out to put in a Java program stop working when you have to pull a terabyte of data out. We wanted to move all the semantics, all the business logic, all of the model of your business as close to the data as possible. We built a system that we take to market as an extension to data clouds like Snowflake. We call it a co-processor. We plug in inside Snowflake and build a relational knowledge graph that facilitates queries that do graph analytics, rule-based reasoning, predictive analytics, and prescriptive analytics. It’s all in one place: your data, your SQL, your Python, and all of these capabilities. We see a 10 to 20x reduction in complexity and code size.

    Nataraj: What forced you to build on top of, or as you call it, a co-processor on Snowflake? There are other platforms like Databricks. What pushed the edge in Snowflake’s direction?

    Molham Aref: This is a very important decision. In the 90s, I was at a company that picked Oracle and Unix as a platform. We were competing against companies that picked Informix or some other relational database. If you don’t get the platform decision right, you can jeopardize your go-to-market motion. From my perspective, we talked to a lot of enterprises, and what we see in practice in the Fortune 500 and the Global 2000 is for SQL and data management, Snowflake is by far the leader. We see them first. We see BigQuery a distant second. Databricks, until recently, didn’t have a SQL offering. They’re everywhere with Spark, but we still don’t see them that much for SQL. For us, it was a really obvious choice to build on Snowflake because they’ve got the traction. Now, there’s a lot of competition, and we’ll see how it all evolves, but my bet is still on Snowflake.

    Nataraj: Can you tell us a little bit about your journey of finding your first five customers and what it took to get them?

    Molham Aref: It gets progressively easier as you work in the B2B space more. Earlier in my career, I didn’t start with the customer at all. At HNC, we were selling neural networks. You go to a bank and say, ‘Buy my neural networks.’ The bank goes, ‘What’s a neural network and why would I buy it?’ At some point, they realized that wasn’t effective. It was better to go to a bank and tell them we’re solving a problem they have in their language. If you say, ‘You’re losing $500 million a year in credit card fraud, and if you use our system, you’ll only be losing $200 million,’ any banker’s going to understand that. Then it just becomes a matter of proving you can create those savings. I learned the importance of learning the language of the industry I’m selling into. The folks that are most effective in sales are not the slick talkers; it’s the people who can bring content to a conversation so the prospect doesn’t feel they’re wasting their time. Fortunately, after being at it for a while, you get a multiplicative effect. We had some early customers at RelationalAI that used to be customers of mine 15 or 20 years ago at a different startup. Because of the good work we did for them then, there was a level of trust.

    Nataraj: Which field did you pick initially and what was the value that you were offering them?

    Molham Aref: Starting from graduating college, I liked computer vision and stumbled into an internship at AT&T. Then when I went to HNC, it was because I was interested in neural networks, not particularly in industries. The group I was attached to was selling into retail, manufacturing, and CPG. So you start learning about forecasting problems, supply chain, and merchandising. You learn the language of the industry that way. You have to do the hard work of seeing it from the eyes of your customer.

    Nataraj: Right now, what type of customers are you mainly catering to? Is there a sweet spot?

    Molham Aref: RelationalAI is more of a horizontal infrastructure play. We are a co-processor to Snowflake. Instead of going to the line of business executive, we’re targeting the Snowflake customer. There’s usually a CTO, CIO, or someone senior who understands infrastructure and data management. To that person, we seem like magic. They spent the last two or three years moving all their data into Snowflake. The last thing they want to do is take that data and pull it back out from Snowflake to put it in a point solution for graphs, rules, predictive, or prescriptive. We come along and say, keep it all there. We plug into that environment. We run inside the security perimeter, same governance. You don’t have to worry about data synchronization because our knowledge graph is just a set of views on the underlying Snowflake tables. We’re relational, which is the same paradigm as Snowflake. So you get something that feels like magic.
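
    As a loose illustration of the ‘views, not copies’ idea (this is not RelationalAI’s actual product or query language, and the table and column names are hypothetical), a graph edge can be defined as a plain view over an existing Snowflake table, so nothing is moved out of the warehouse or synchronized separately.

    ```python
    # Not RelationalAI's actual product or query language; table and column names
    # are hypothetical. Assumes the snowflake-connector-python package and valid
    # credentials for an existing Snowflake account.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account", user="me", password="...",
        warehouse="ANALYTICS_WH", database="SALES", schema="PUBLIC",
    )

    # A graph "edge" defined as a plain view over an existing orders table:
    # customers connected to the products they bought. Nothing is copied or
    # synchronized; the view reads the underlying table in place, inside the
    # same security perimeter and governance.
    conn.cursor().execute("""
        CREATE OR REPLACE VIEW PURCHASED_EDGE AS
        SELECT customer_id AS src, product_id AS dst, order_date
        FROM ORDERS
    """)
    ```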

    Nataraj: When you’re building on top of Snowflake, how do you think about competing or getting cannibalized by Snowflake itself?

    Molham Aref: It’s not a new phenomenon. It used to be hardware was the platform. Then operating systems came along. Then Oracle came along. There’s always this tension between the platform and the thing running on it. If the thing running on the platform starts to generate a lot of value, the platform provider can try to make it a feature. You see this all the time. You have to be really good and have a substantial enough solution where it’s either very difficult or very expensive to copy. Look at vector databases. Very hot for about six months, but now it’s a feature in everything. With us, our technology has deep moats. We have a new class of join algorithms, new classes of query optimization, new relational ways of expressing certain things. It creates deep enough moats where I think everyone will have an easier time working with us than trying to compete with us, at least the platform providers.

    Nataraj: Can you talk a little bit more about this concept of the modern data stack and where RelationalAI fits into it?

    Molham Aref: The modern data stack is a term that came into existence about 10 years ago. It was about the unbundling of data management. There are two ways to make money in our industry: bundling and unbundling. The modern data stack was basically a term used to describe an unbundling of data management so that you could pick different things. You can pick your cloud vendor, your data management platform, your semantic technology, your BI technology. They weren’t coupled together. From that, you had certain things emerge, like Snowflake, DBT, and Looker. It continues now with Open Table Formats and Open Catalogs. I think the next big fight is going to be around semantics and business logic.

    Nataraj: What do you mean by business logic and semantics?

    Molham Aref: It’s like DBT makes it possible to express semantics in SQL in a way that you can then pick whatever SQL technology you want to run it on. With business logic, you’re kind of tied into certain stacks. A lot of the business logic people write is in procedural programming languages that are not open. If you can capture the semantics of your business logic in a generic, declarative way, then you can map that to whatever is popular that day. A lot of energy is spent managing accidental concerns, not fundamental concerns. If you had your semantics defined in a way where they were not platform dependent, then whatever replaces cloud computing, you would just target that. You separate the business logic from the underlying tech.
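
    A tiny invented example of what ‘capture the logic once, map it to whatever platform is popular’ could look like: the rule below is declared as data, then compiled either to a SQL predicate for a warehouse or to an in-process Python predicate, without rewriting the rule itself.

    ```python
    import operator

    # Invented illustration: declare a business rule once as data, then compile it
    # to different targets instead of hard-coding it in one platform's procedural code.
    rule = {"name": "high_value_order", "field": "order_total", "op": ">", "value": 500}

    def to_sql(r: dict) -> str:
        """Compile the rule to a SQL predicate for a warehouse."""
        return f"{r['field']} {r['op']} {r['value']}"

    def to_python(r: dict):
        """Compile the same rule to an in-process predicate."""
        ops = {">": operator.gt, "<": operator.lt, "=": operator.eq}
        return lambda row: ops[r["op"]](row[r["field"]], r["value"])

    print(to_sql(rule))                           # order_total > 500
    print(to_python(rule)({"order_total": 799}))  # True
    ```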

    Nataraj: As someone who saw machine learning and AI evolve, how do you see the current GenAI hype cycle? What are you excited about?

    Molham Aref: GenAI is super exciting. For the first time, we have models that can be trained in general and then have general applicability. Up until GenAI, you built models for specific problems. Now you have models that just learn about the world. I think we are a little bit past the peak of the hype cycle. In the enterprise, what people are finding out is having a model trained about the world doesn’t mean that it knows about your business. What I see happening now is a bit more sobriety and the realization that to have GenAI impact, I need to be able to describe my business to the GenAI model. It doesn’t come with an understanding of my business a priori. We’re starting to see a lot of appreciation for combining that kind of technology with more traditional AI technology like ontologies and symbolic definitions of an enterprise.

    Nataraj: Are you leveraging GenAI for your own company? If so, in what ways?

    Molham Aref: We don’t build models; we’re not an Anthropic or an OpenAI. Our core competency is how you combine a language model with a knowledge graph to answer questions that can’t be answered otherwise. We’ve been doing work with some of our customers to show how much more powerful GenAI can be if it’s combined with a knowledge graph. Internally, all our developers have the option of using coding copilots. We are exploring some new companies that will make all our internal information searchable via a natural language interface. But we’re still a relatively small company.

    Nataraj: You emphasized how sales is perceived in B2B. Can you talk more about your framework for approaching B2B sales?

    Molham Aref: I think it’s a mistake for the founders of the company not to take direct responsibility for sales. You have to go out there and do the really hard work of customer engagement and embarrassing yourself to see what really works, what really resonates, and where the pain is. Trying to hire a salesperson to do that for you early on is a huge mistake. Once you’ve figured out what works, now you have the challenge of simplifying it and establishing proof points so it becomes easier for someone who is not as close to the problem or technology to come in and sell it. But even then, you want that person to be able to have a content-rich conversation with a buyer. People are worried about their jobs, their careers, their companies. They want to spend time with people who can really help them.

    Nataraj: Where do you see RelationalAI going next?

    Molham Aref: We just launched our public preview last June. It’s been amazing, all the inbound customer interest from the Snowflake ecosystem. We have a GA announcement coming out next week. There’s just so much alignment between us, our customers, and our partner Snowflake, that I think we will spend a lot of energy in the next two or three years building a very sizable business there. Beyond that, we will see. I do think intelligent applications represent a great opportunity because they’re still hard to build. If the world starts to appreciate the value of this data-centric, knowledge-graph-based way of building applications, I think we will enjoy serving the market as it figures this out.

    Nataraj: What do you consume that forces you to think better?

    Molham Aref: I really enjoy reading about history and listening to various historians characterize history at many different scales. There’s a lot to learn from history. It does repeat itself. There’s so much to learn in terms of human beings, our behavior, how we organize, how we get excited about pursuing certain ideas, and how ideas emerge and die. I think there are analogs of that in the enterprise because our job is to motivate a group of people around a mission to go do something great.

    Nataraj: Who are the mentors who helped you in your career?

    Molham Aref: Many people have been kind and generous. I’ll call out two people. One is Cam Lanier. He was just an amazing guy who passed away earlier this year. He was a great role model of someone who became very successful in business because if he shook hands with you on something, you could take that to the bank. He understood how integrity and trust really drive profit. I’m forever indebted to him for his mentorship. Another person is Bob Muglia. I met Bob after moving to Silicon Valley. He and I connected very much on what we do at RelationalAI. He’s studied the history of the relational movement and how it became dominant. Bob is just an amazing product person and an amazing human being.

    Nataraj: What do you know about being a founder or CEO that you wish you knew when you were starting?

    Molham Aref: It’s hard. It’s very difficult. This will probably be the last time I do this. I’ve been very fortunate to be part of some very successful ventures. I couldn’t not do this because I’m on this mission to make this kind of software development possible, and I work with some amazing people. But this stuff ages you. It’s really difficult, and you have to be ready for a lot of difficult times. Also, working well with people. A lot of the challenges you have are with people dynamics, creating an environment where you can have a diversity of expertise and skills and have people work together and appreciate each other. That’s super challenging.

    Nataraj: Well, that’s a good note to end the podcast on. Thanks, Molham, for coming on the show and sharing your amazing insights.

    Molham Aref: Thanks. Thanks for having me.

    This deep dive with Molham Aref highlights the shift towards data-centric architectures and the immense opportunity in simplifying complex enterprise workflows. His insights provide a clear roadmap for leveraging modern data platforms to build truly intelligent applications.

    → If you enjoyed this conversation with Molham Aref, listen to the full episode here on Spotify, Apple, or YouTube.
    → Subscribe to our newsletter and never miss an update.

  • Read.ai’s Growth to $50M: Founder David Shim on Building an AI Co-Pilot

    David Shim is no stranger to the startup world. A repeat founder, former Foursquare CEO, and active investor, he has a deep understanding of what it takes to build and scale a successful tech company. His latest venture, Read.ai, is on a mission to become the ultimate AI co-pilot for every professional, everywhere. Starting as an AI meeting summarizer, Read.ai has rapidly evolved, leveraging unique sentiment and engagement analysis to deliver smarter, more insightful meeting outcomes. The company’s product-led growth has been explosive, attracting over 25,000 new users daily without a dollar spent on media and recently securing a $50 million Series B funding round.

    In this conversation, David shares the origin story of Read.ai, detailing how a moment of distraction in a Zoom call sparked the idea. He explains their unique technological approach, which combines video, audio, and metadata to create a richer understanding of meetings than traditional transcription. David also dives into his philosophy on building for a horizontal market, the future of AI agents, and his journey as a founder and investor.

    → Enjoy this conversation with David Shim on Spotify or Apple.

    → Subscribe to our newsletter and never miss an update.

    Nataraj: You’ve worked and created companies before and after COVID, and you mentioned a lot of your team is remote. Do you have a take on whether remote or hybrid work is better? What is your current sense of what is working best for you when running a company?

    David Shim: I’d say hybrid is the future; it’s what works. Where I would say hybrid doesn’t work as well is if you’re really early in your career. It is harder to build those relationships and get that level of mentorship on a fully remote basis. It’s not to say that it’s impossible, but it is a lot more difficult. When you’re in an office, you have that serendipity of meeting people and building connections. When you’re fully remote, especially early in your career, you don’t know who to ask beyond your manager and your immediate team.

    That said, once you’re more senior, I think it becomes easier to be fully remote. You know what to do, who to talk with, and you’re not afraid to break down walls. I think Read is the same way. We’ve got a third of our team fully remote, and then people come into the office Tuesdays, Wednesdays, and Thursdays. We let people come in on Mondays and Fridays, but they don’t have to. What’s really happening is people actually like that level of interaction, so they’re coming in without being required.

    Nataraj: Let’s get right into Read.ai. Great name, great domain name. Talk to me a little bit about how the company started. What was the original idea?

    David Shim: The original idea started when I was in a meeting. After I’d left Foursquare as CEO, I had a lot of time on my hands. It was still peak COVID, so no one was meeting in person. I was doing a lot of calls, either giving advice or considering investments. What I started to realize was within two or three minutes of a call, you know if you should be there or not. I’d think, ‘I should not have been invited to this meeting. Why am I here?’ But you can’t just leave. So, like most people, I’d surf the web or answer emails.

    One time, I noticed a color on my screen that matched someone else’s screen. I looked closer and saw a reflection in their glasses. It was the same image I could see on ESPN.com. That triggered an idea: can you use not just audio, but video to understand sentiment and engagement? Can I determine if someone is engaged on a call? It wasn’t about being ‘big brother,’ but about identifying wasted time. In a large company, you can invite 12 people to a meeting, and they’ll all accept, potentially wasting 12 hours if they didn’t need to be there. So, the idea started to form around using this data to optimize productivity.

    Nataraj: So were you analyzing video and text at that point, or just text?

    David Shim: Video and text. Transcription companies already existed, as well as platforms like Zoom and Microsoft that had transcription built-in. I didn’t want to build something that everybody else already had and just make it a little bit better. I wanted something that was a step-function change. So we said, let’s take the existing transcripts but apply sentiment and engagement to them. Think about it: David said this, but Nataraj responded this way. That narration piece is missing. Our AI can go in and say, ‘This is how the person reacted to the words.’ Now, an LLM not only has the quotes that were said but how individual people reacted. It could say, ‘The CEO was really skeptical based on his facial expressions and became disinterested 15 minutes into the call.’ You can’t pick that up from quotes, but you can from visual cues.
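
    As a hypothetical sketch of the idea (not Read.ai’s actual pipeline), the ‘narration layer’ can be thought of as per-utterance reaction annotations interleaved with the transcript before it is handed to a language model, so the summarizer sees how people reacted, not just what was said.

    ```python
    # Hypothetical sketch, not Read.ai's actual pipeline: interleave per-utterance
    # reaction signals with the transcript so the summarizer sees how people
    # reacted, not just what was said.
    transcript = [
        {"speaker": "David",   "text": "We should ship the co-pilot next quarter.",
         "reactions": {"engagement": 0.9, "sentiment": "positive"}},
        {"speaker": "Nataraj", "text": "What does that mean for the current roadmap?",
         "reactions": {"engagement": 0.4, "sentiment": "skeptical"}},
    ]

    def annotated_prompt(utterances) -> str:
        lines = []
        for u in utterances:
            r = u["reactions"]
            lines.append(
                f'{u["speaker"]}: "{u["text"]}" '
                f'[listeners: {r["sentiment"]}, engagement {r["engagement"]:.0%}]'
            )
        return "Summarize this meeting, weighting high-engagement moments:\n" + "\n".join(lines)

    # The resulting string is what would be sent to a language model; with the
    # bracketed annotations present, the summary can lead with what people
    # actually reacted to rather than a flat recap of the quotes.
    print(annotated_prompt(transcript))
    ```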

    Nataraj: What’s the main value that the customers got from Read.ai at that point?

    David Shim: In 2022, we launched the product with real-time analytics, showing scores for sentiment and engagement. People found it interesting and valuable. But what was missing was stickiness. People would say, ‘You’re telling me this call is going really bad, but what do I do?’ You’re not giving me advice. That’s when large language models came out at scale in late 2022. We tested them and wondered if our unique narration layer, when applied to the text of the conversation, would create a materially different summary. The answer was yes. Comparing a summary with our narration layer to one without, it was totally different. You want to put what everyone is paying attention to at the top of the summary, and getting that reaction layer really changed the quality of a meeting summary.

    We started 2023 as the number 20 meeting note-taker in the world. Now we’re number two, and we’re within shooting distance of number one. To go from number 20 to number two in less than 18 months highlights that there’s a difference in our approach, methodology, and the quality of the product.

    Nataraj: And how did you acquire your customers in these 18 months? Was it inbound, outbound? Did you target a certain segment?

    David Shim: Many VCs say to pick a specific niche and go vertical. My take was that this is such a big market. If this is a seminal moment where everyone’s going to require an AI assistant, it means everyone from an engineer at Google to a teacher to an auto mechanic will need it. So we went horizontal versus vertical. That has helped a lot from a product-led growth motion. We’re adding over 25,000 to 30,000 net new users every single day without spending a dollar on media. It’s pure word of mouth. If you see the product, people will use it, talk about it, and share it.

    Nataraj: Is that because if you’re on a meeting with someone, the other people see it being used and then they buy it? The product inherently has that viral aspect, right?

    David Shim: 100%. Meetings are natively multiplayer. The problem now is, ‘How do I get access to those reports?’ We’ve made it really simple for the owner to share it just by typing in an email address, pushing it to Jira, Confluence, or Notion. We’re not trying to be a platform where everyone has to consume the data. This is where ‘co-pilot everywhere’ comes into play. We want to push it wherever you work. You can see the data on a Confluence page or a Salesforce opportunity that has better notes than the seller ever created. At the bottom, it says, ‘Generated by Read,’ and you wonder, ‘What is this thing?’ That bottoms-up motion has driven a lot of our growth.

    Nataraj: I can almost see this becoming an ever-present co-pilot in a work setting that will change productivity for knowledge workers. Is that the vision where you’re going?

    David Shim: That’s exactly what we’re thinking from a ‘co-pilot everywhere’ perspective. When you think about the current state, it’s about pushing data to different platforms. But you also need to pull data down. For example, Read doesn’t treat your first meeting on a topic and your tenth meeting as silos; it aggregates them to give you a status report on how a topic is progressing. Three months ago, we introduced readouts that include emails and Slack messages. Now it’s truly a co-pilot everywhere, not just for your meeting. The adoption becomes incredible because you don’t have to log into Gmail, Salesforce, and Zoom separately. It’s just right there.

    Nataraj: You’re still thinking breadth-first, or are you now targeting what a Fortune 500 company wants versus an SMB?

    David Shim: We’re still horizontal, but we’re picking specific verticals based on customer demand, like sales, engineering, product management, and recruiting. That’s why we did integrations with Notion, Jira, Confluence, and Slack. Another way to look at ‘co-pilot everywhere’ is agents. Everyone’s talking about AI agents, but in reality, you want your Jira to talk with your Notion, to talk with your Microsoft, to talk with your Google. That’s the push and pull of data between integrations. I think that is going to be the next big space.

    Nataraj: What about the foundation models you’re using? Are you beholden to their pricing?

    David Shim: We are not beholden to them. Last month, 90% of our processing was our own proprietary models. We use large language models for the last mile—taking the interesting topics and reactions we found and putting them into a readable sentence or paragraph. But we’re building our own models that cluster groups of data together, identify the subject matter, and then we go to the LLMs and say, ‘Summarize this for us.’ 90% of our processing cost is our own internal models. We have five issued patents now with more pending. We’re not just a wrapper layer with good prompts; that’s not a defensible moat.
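
    As a rough illustration of that split between in-house models and the LLM ‘last mile’ (Read’s real models are proprietary), the sketch below groups related statements locally with off-the-shelf tooling and leaves only the final phrasing of each cluster to an external model; the sample data and the commented-out `llm.complete` call are hypothetical.

```python
# Heavy lifting (grouping related statements) is done locally with classical
# clustering; only the final wording of each cluster would be handed to an LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

statements = [
    "The Q3 launch slipped by two weeks.",
    "Engineering needs one more sprint before launch.",
    "Marketing wants the launch date locked by Friday.",
    "Hiring for the data team is behind plan.",
    "Two data-engineer offers are still out.",
]

vectors = TfidfVectorizer().fit_transform(statements)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

clusters = {}
for text, label in zip(statements, labels):
    clusters.setdefault(label, []).append(text)

for label, items in clusters.items():
    # Only this last-mile step would call an external LLM, e.g.:
    # summary = llm.complete("Summarize in one sentence:\n" + "\n".join(items))
    print(f"Cluster {label}: {items}")
```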

    Nataraj: What do you think about the whole trend of agents? Are we seeing any real use cases?

    David Shim: I think it’s early. It’s the same way with voice. Voice is interesting, but if you look at Alexa or Siri, they had massive early scale and then kind of dropped off. It’s an important play, but it’s a feature, not the whole product. With agents, it’s the same thing. It’s not that simple to say, ‘Go search for flights and find me the best one.’ You need to know what to ask for. What dates? Are you using miles? An agent in theory could do that, but you still need to upload the training data. I think the agents working in the background are more likely to succeed, where someone has trained data on how to handle specific use cases. But for the consumer, they’re not going to know what an agent is, just like most consumers don’t know what an API is.

    Nataraj: What is the vision for Read.ai for the next couple of years?

    David Shim: In the next one to two years, it’s diving further into ‘co-pilot everywhere.’ We’re adding more native integrations with both push and pull capabilities. Where we want to get to is optimization. I’ve got your emails, messages, and meetings. If you’re a seller, as you have each call, I can go into Salesforce and update the probability of a deal from 25% to 50%, then 75%. We can push a draft to the seller saying, ‘We think this opportunity should go to 75%,’ and include the quote from the meeting that justifies it. Now, what was the most hated function for a seller—updating Salesforce—becomes an automated process where they just swipe right or left. That’s the level of optimization people will ask for.
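
    The ‘draft an update, let the seller swipe to approve’ flow might look something like the sketch below; the field names and the `propose_update` helper are hypothetical stand-ins for whatever CRM integration actually carries the change.

```python
from dataclasses import dataclass

@dataclass
class DealStageDraft:
    opportunity_id: str
    current_probability: int   # percent, as currently recorded in the CRM
    proposed_probability: int  # percent, suggested from the latest call
    justification_quote: str   # pulled from the meeting transcript

def propose_update(draft: DealStageDraft) -> None:
    """Show the seller a draft change plus the evidence, instead of applying it."""
    print(f"Opportunity {draft.opportunity_id}: "
          f"{draft.current_probability}% -> {draft.proposed_probability}%")
    print(f'Why: "{draft.justification_quote}"')
    print("Swipe right to apply, left to dismiss.")

propose_update(DealStageDraft(
    opportunity_id="006-ACME",
    current_probability=50,
    proposed_probability=75,
    justification_quote="We're ready to move to a paid pilot next month.",
))
```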

    Nataraj: I want to slightly shift gears and talk about your investing. What’s your general thesis?

    David Shim: On the venture side, my thesis is if you believe in me enough to invest in my company, I should have the same belief to invest in your VC fund. If you’re a portfolio company, they’ll often give you access for a lower amount. I think every founder should take advantage of that. When I do angel investing, it’s one of two things. One, anyone I know that I’ve worked with before who asks me to invest, I’m more likely to say yes. It’s about giving back that same level of trust. The second is for more interesting opportunities that come up on my radar where I feel they have something novel that can deliver outsized returns.

    Nataraj: What do you know about being a founder that you wish you knew when you were starting?

    David Shim: At my first company, Placed, I was a solo founder. That is very expensive on your time, stress level, and relationships. You have nobody else to go to. I would say, don’t force it. If you can find co-founders that you trust and work with really well, do it. With Read.ai, my co-founders Elliot and Rob have been incredible. It distributes the work, stress, and knowledge. When you have three really smart people coming back together with different ideas, you can ideate better. So for any founders out there, if the opportunity exists, go with a co-founder versus solo.

    From an investor standpoint, outside of your own startup, don’t over-index on anything. Whatever is hot will stay hot for a little bit, but it will almost always drop off. Be careful about over-indexing. A lot of times, just put it in an index fund. The S&P 500 is up 25%—that’s better than most VC IRR on a yearly basis, and it’s liquid.

    This conversation offers a masterclass in building a modern AI company, highlighting the importance of a unique technological moat, a powerful product-led growth engine, and a clear vision for the future. David’s journey provides valuable lessons for any founder navigating the AI landscape.

    → If you enjoyed this conversation with David Shim, listen to the full episode here on Spotify or Apple.

    → Subscribe to our newsletter and never miss an update.

  • How Decagon Built Human-Level AI Support: Ashwin Sreenivas on customer obsession, early traction, enterprise complexity, and the AI concierge future

    Unlock the secrets to Decagon AI’s $1.5 billion valuation and AI-powered customer support.

    Ashwin Sreenivas is the co-founder of Decagon AI, a company revolutionizing enterprise customer support with AI agents. Founded in 2023, Decagon has rapidly grown to a $1.5 billion valuation, automating support workflows for brands like Duolingo and Notion. Ashwin, previously co-founder of Helio (acquired by Scale AI), shares insights into Decagon’s product-market fit, secret sauce, and tangible business impact, revealing how AI is transforming customer interaction. If you’re curious about the future of AI in enterprise solutions, this episode is a must-listen.

    Listen now: YouTube | Apple | Spotify

    Quotes from the episode

    • Traditional chatbots relied on rigid decision trees, leading to frustrating customer experiences, but Decagon’s AI agents are trained like humans, enabling fluid, natural conversations.
    • Decagon’s AI agents follow Agent Operating Procedures (AOPs), which are similar to human SOPs, and this allows them to handle customer interactions across chat, phone, SMS, and email.
    • The key is to focus on building AI agents that can follow instructions effectively, allowing businesses to offer personalized customer concierge services and seamless user experiences.
    • Instead of predicting what customers want, AI should learn customer preferences and remember them, making interactions more seamless and efficient, enhancing overall satisfaction.

    What you’ll learn

    • Understand how Decagon AI is transforming customer support by using AI agents that can handle conversations across various channels.
    • Learn about Agent Operating Procedures (AOPs) and how they enable AI agents to follow instructions and interact with customers like humans.
    • Discover how Decagon AI helps businesses expand their support offerings, leading to higher retention and happier customers through increased support access.
    • Explore the importance of solving customer problems quickly and seamlessly, regardless of whether the interaction is with a human or an AI agent.
    • See how Decagon AI is expanding beyond customer support to offer customer concierge services, enabling personalized and friction-free interactions.
    • Learn how focusing on customer needs and building something people will pay for can simplify early-stage company challenges.

    Takeaways

    • Decagon AI’s agents use Agent Operating Procedures (AOPs) to mimic human-like interactions, which contrasts with older chatbot tech that relied on rigid decision trees.
    • Unlike traditional approaches, Decagon AI focuses on creating a single agent adept at following instructions, improving onboarding and iteration for customers.
    • Training smaller, fine-tuned models can outperform larger models on specific tasks, providing better performance and lower latency for customer interactions.
    • Customer support is evolving into a brand differentiator, with companies like Amazon and American Express setting the standard for excellent service and customer trust.
    • By making support more affordable, businesses can reinvest savings into providing more extensive support, leading to higher customer retention and satisfaction.
    • Early customer acquisition requires manual effort, including networking, cold emailing, and LinkedIn messaging, with a focus on charging for the software from day one.
    • Concentrating on building solutions that customers are willing to pay for within a short timeframe helps to validate business models and weed out unpromising ideas.

    Don’t forget to subscribe and leave us a review/comment on YouTube, Apple, or Spotify.

    It helps us reach more listeners and bring on more interesting guests.

    Stay Curious, Nataraj

  • Glean AI Founder Arvind Jain on the Future of Enterprise AI Agents

    Arvind Jain, CEO of Glean AI and co-founder of the multi-billion dollar company Rubrik, is a veteran of Silicon Valley’s most demanding engineering environments. After a decade as a distinguished engineer at Google, he experienced firsthand the productivity ceiling that fast-growing companies hit when internal knowledge becomes fragmented and inaccessible. This pain point led him to create Glean AI, initially conceived as a “Google for your workplace.” In this conversation with Nataraj, Arvind discusses Glean’s evolution from an advanced enterprise search tool into a sophisticated conversational AI assistant and agent platform. He dives into the technical challenges of building reliable AI for business, how companies are deploying AI agents across sales, legal, and engineering, and his vision for a future where proactive AI companions are embedded into our daily workflows. He also shares valuable lessons on company building and fostering an AI-first culture.

    👉 Subscribe to the podcast: startupproject.substack.com


    Nataraj: My wife’s company actually uses Glean, so I was playing around to prepare for this conversation. But for most people, if their company is not using it, they might not be aware of what Glean is and how it works. Can you give a pitch of what Glean does today and how it is helping enterprises?

    Arvind Jain: Most simply, think of Glean as ChatGPT, but inside your company. It’s a conversational AI assistant. Employees can go to Glean and ask any questions that they have, and Glean will answer those questions for them using all of their internal company context, data, and information, as well as all of the world’s knowledge.

    The only difference between ChatGPT and Glean is that while ChatGPT is great and knows everything about the world’s knowledge, it doesn’t know anything internally about your company—who the different people are, what the different projects are, who’s working on what. That context is not available in ChatGPT, and that’s the additional power that Glean has. That’s the core of what we do. We started out as a search company. Before these AI models got so good, we didn’t have the ability to take people’s questions and just produce the right answers back for them using all of that internal and external knowledge. In the past, I would call ourselves more like Google for your workplace, where you would ask questions, and we’ll surface the right information. But as the AI got better, we got the ability to actually go and read that knowledge and instead of pointing you to 10 different links for relevant content, we could just give you the answer right away. That’s the evolution of how we went from being a Google for your workplace to being a ChatGPT for your workplace. We’re also an AI agent platform. The same underlying platform that powers our ChatGPT-like experience is also available to our customers to build all kinds of AI agents across their different functions and departments and ensure that they’re delivering AI in a safe and secure way to their employees.
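
    A minimal sketch of the ‘ChatGPT plus your company’s context’ idea, assuming a trivial keyword-overlap retriever and made-up internal documents; a production system like Glean uses far more sophisticated retrieval, but the shape of the flow is the same: find the relevant internal content, then pass it to the model alongside the question.

```python
# A toy retrieval step: find the most relevant internal document for a
# question, then build a prompt that gives the LLM that context.
def score(question: str, doc: str) -> int:
    q_words = set(question.lower().replace("?", "").split())
    return len(q_words & set(doc.lower().split()))

internal_docs = {
    "project-atlas.md": "Project Atlas is the Q4 data-platform migration led by Priya.",
    "pto-policy.md": "Employees accrue 1.5 PTO days per month.",
}

question = "Who is leading Project Atlas?"
best_doc = max(internal_docs, key=lambda name: score(question, internal_docs[name]))

prompt = (
    "Answer using the company context below.\n\n"
    f"Context ({best_doc}):\n{internal_docs[best_doc]}\n\n"
    f"Question: {question}"
)
# `prompt` would then go to whichever general-purpose LLM the assistant uses.
print(prompt)
```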

    Nataraj: You started in 2019 as an AI search company. Now, it feels very natural to build a ChatGPT-like product for enterprise because the value is instantaneous. But why did you pick the problem of solving enterprise AI search back then? It was not the hot thing or an obvious problem. What was your initial thesis?

    Arvind Jain: For me, it was obvious because I was suffering from that pain. Before Glean, I was one of the founders of Rubrik. We had great success and grew very fast; in four years, we had more than 1,500 people. As we grew, we ran into a productivity problem. There was one year where we had doubled our engineering team and tripled our sales force, but our metrics—how much code we were writing, how fast we were releasing software—were flatlining. We just couldn’t produce more, no matter how many people we had.

    One key reason was that the company grew so fast, and there was so much knowledge and information fragmented across many different systems. Our employees were complaining that they couldn’t find the information needed to do their jobs. They also didn’t know who to ask for help because there was no concept of who was working on what. When we saw this as the number one problem, I decided to solve it. My first instinct as a search engineer was to just go and buy a search product that could connect to all of our hundred different systems. That revealed to us that there was nothing to buy. There was no product on the market that would connect to all our SaaS applications and give people one place where they could simply ask their questions and get the right information. That was the origin. I felt nobody had tried to solve the search problem inside businesses, even though Google solved it in the consumer world. That got me excited. At that time, we were not thinking about building a ChatGPT-like experience; nobody knew how fast AI would evolve.

    Nataraj: I think pre-ChatGPT, almost no one called AI “AI”; it was called ML or some other technical term. I remember watching Google’s Pixel phone launches in 2020-2021, and they were doing a lot of work creating AI-first products very early on. But for some reason, the tragedy is that Google is seen as not doing enough with AI. That’s a gap between the narrative and the actual experience.

    Arvind Jain: In 2021, we launched our company to the public and we called ourselves the Work AI Assistant. We didn’t call ourselves a search product because we could do more than search. We could answer questions and be proactive. But it was a big problem from a marketing perspective because nobody understood what an assistant was. Nobody had really seen ChatGPT. It was a big failure, and we rebranded ourselves as a search company. Then, of course, with ChatGPT launching, people realized how capable AI is and that it can really be a companion, which is when we came back to our original vision.

    Nataraj: One CEO I spoke with mentioned that when you pick a really hard problem to work on, a couple of things become easier. It’s easier to convince investors because the returns will be very high if you’re successful, and you can attract people who want to solve hard problems. What’s your take on picking a problem when starting a company?

    Arvind Jain: I agree with that assessment. It’s not that you’re just trying to pick something super hard to solve as the main criterion. The main criterion still has to be that you add value and build a useful product. I’m always attracted to working on problems that are very universal, where we can bring a product to everybody. I like it both because of the impact you’re going to make and because building a startup is a difficult journey. You have to have something that makes you go through that, and for me, that something is impact—solving a problem that builds a product useful to a very large number of people.

    Second, when you think about solving problems, you have to think about your strengths. If you are a technologist, it’s a gift if the problem you’re trying to solve is a difficult one because you’ll be able to build that technology with the best team, and you won’t get commoditized quickly. With search, I knew how hard the problem is. That was definitely an exciting part of why I started Glean—I knew that if we solved the problem, it would be a super useful product and a technology that others wouldn’t be able to replicate quickly.

    Nataraj: One thing I often see with tools like ChatGPT or Glean AI in the enterprise context is that when you’re working on certain types of data, it’s not enough to be 90% accurate. If I’m reporting revenue numbers to my leadership, I want it to be 99.9% accurate. Can you talk a little bit about the techniques you are using to reduce hallucination?

    Arvind Jain: AI is progressing quite quickly. There’s a lot of work that the platforms we use, like OpenAI, Anthropic, and Google, are doing. The models today are significantly different from the models we had last year in terms of their ability to reason, think, and review their own work, giving you more confident, higher accuracy answers. There’s a general improvement at the model layer, which is reducing hallucinations significantly.

    Then, coming into the enterprise, none of these models know anything about your company. When you solve for specific business tasks, the typical workflow is that you have a model that is thinking and retrieving information from your different enterprise systems. It uses that as the source of truth to perform its work. It becomes very important for your AI product to ensure that for any given task, you are picking the right, up-to-date, high-quality information written by subject matter experts. Otherwise, you end up with garbage in, garbage out. That is what most people are struggling with right now. They build AI applications, dabble in retrieving information, and then complain to their customers that their data is bad. That’s not the right answer because AI should be smart enough to understand what information is old versus new. As a human, you have judgment. You look for recent information. If you can’t find it, you talk to an expert. AI has to work the same way, and that is what Glean does. We connect with all the different systems, understand knowledge at a deep level, identify what is high quality and fresh, and ensure that models are being provided the right input so they can produce the right output. Our entire company is focused on that.
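
    One way to picture ‘pick the right, up-to-date, high-quality information’ is a ranking function that blends relevance with freshness and author expertise before anything reaches the model. The weights, fields, and documents below are illustrative assumptions, not Glean’s actual scoring.

```python
from datetime import date

def retrieval_score(doc, query_relevance: float, today: date) -> float:
    """Blend relevance, freshness, and author expertise into one ranking score."""
    age_days = (today - doc["updated"]).days
    freshness = max(0.0, 1.0 - age_days / 365)         # decays over roughly a year
    expertise = 1.0 if doc["author_is_sme"] else 0.5   # subject-matter-expert boost
    return 0.6 * query_relevance + 0.25 * freshness + 0.15 * expertise

docs = [
    {"name": "pricing-2022.md", "updated": date(2022, 3, 1), "author_is_sme": True},
    {"name": "pricing-2024.md", "updated": date(2024, 11, 5), "author_is_sme": True},
]
today = date(2025, 6, 1)
ranked = sorted(docs, key=lambda d: retrieval_score(d, query_relevance=0.9, today=today),
                reverse=True)
print([d["name"] for d in ranked])  # the fresher document ranks first
```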

    Nataraj: You mentioned an AI agent platform. What are the typical use cases for which enterprises are creating agents?

    Arvind Jain: I’ll pick some key ones across a few departments. For sales teams, much of their time is spent on prospecting and lead generation. You can build a really good AI agent that does that faster and with higher quality than a human in many cases. People have built an agent on Glean where a salesperson says, “I would like to prospect these five accounts today,” and Glean will do a good amount of research, identify the right contacts, and generate personalized outreach messages. Our salespeople then review the work of AI with a thumbs up or thumbs down, and the messages get sent out. They can now prospect at a rate five times greater than before. Similarly, after a customer call, an agent can generate the meeting follow-up with action items and supporting materials, a task that used to take hours.

    For customer service, the job is to answer customer questions and help with support tickets. AI is pretty good at that. People have built agents to auto-resolve tickets. For engineering teams, AI can be a really good code reviewer. The Glean AI code review agent is quite popular; it’s the first one to review any code an engineer uploads and can handle basics like following style guides. The use cases are exploding. Last year it was all about engineering and customer support, but now it’s all departments. Legal teams are using a redlining agent that automatically creates the first version of redlines on third-party papers like MSAs or NDAs. It’s a huge time and cost saver. The democratization is happening now.

    Nataraj: It feels like a better way to describe agents is as ‘workflow agents,’ similar to Zapier but with an intelligence layer. This can only work if you’re integrated well with different apps, and today every company uses hundreds of SaaS tools. Can you talk about that challenge?

    Arvind Jain: You’re spot on. Agents have to work on your enterprise data, use model intelligence to mimic human work, and take actions in your enterprise systems. There’s a strong dependence on your ability to both read information and take actions. The good news for Glean is that we’ve been working on that for the last six and a half years. We have hundreds of these integrations and thousands of actions we can support, which becomes the raw material for building these agents.

    It’s interesting how hard it is to get that to work because enterprise systems are very bespoke. One major challenge is security and governance. You can’t have an agent platform where agents just read any data from any system. You have to follow the governance architecture and rules inside the company, like permissions and access control. You have to not only build these integrations but also work upwards from that to handle agent security and ensure you deliver the right data to these agents, not stale or out-of-date information.

    Nataraj: We’ve seen a few form factors: the chat bar, then RAG on the engineering side, and now everyone is talking about agents. What is the next form factor or use case you see coming up?

    Arvind Jain: One big shift from the initial ChatGPT-like experience, which is very conversational and reactive, is that agents are becoming more proactive. You can build an agent that runs every day or when a certain trigger condition is met. The next big thing I see is AI becoming even more proactive and embedded in your day-to-day life. You won’t think of AI as a tool you go to; it will just come to you when it detects you need help.
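
    A minimal sketch of that shift from reactive to proactive, assuming a made-up trigger (‘a meeting starts soon and its prep document is unread’) and a stubbed-out briefing step; in a real system the briefing itself would be generated by an LLM over the relevant documents.

```python
from datetime import datetime, timedelta

def unprepared_meetings(calendar, docs_read, now):
    """Trigger: meetings starting within an hour whose prep docs are unread."""
    soon = now + timedelta(hours=1)
    return [m for m in calendar
            if now <= m["start"] <= soon and m["prep_doc"] not in docs_read]

def brief(meeting):
    # In a real system this step would summarize the prep document with an LLM.
    print(f"Heads up: '{meeting['title']}' starts at {meeting['start']:%H:%M}. "
          f"Here's a briefing on {meeting['prep_doc']}.")

now = datetime(2025, 6, 2, 8, 30)
calendar = [{"title": "Roadmap review", "start": datetime(2025, 6, 2, 9, 0),
             "prep_doc": "q3-roadmap.md"}]
for meeting in unprepared_meetings(calendar, docs_read=set(), now=now):
    brief(meeting)
```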

    Our vision for the future of work is that every person will have an incredible personal companion. A companion that knows everything about you and your work life: your role, your company, your OKRs, your career ambitions, your weekly tasks, your daily schedule. It’s walking with you, listening to every word you say and hear. With all that deep knowledge, it’s ready to help proactively. For example, imagine I’m commuting to work. My companion detects I’m unprepared for my meetings. It knows the commute is 38 minutes, so it can offer to brief me as I drive, summarizing the documents I need to read so I feel prepared for my day. That’s where we are headed. AI is going to become a lot more proactive.

    Nataraj: Does that mean Glean is going into cross-platform and cross-application to make us more productive? I can imagine a floating bubble on my mobile where I can just hit a button and narrate a task.

    Arvind Jain: Absolutely. We already have these different interfaces. Glean works on your devices—we have an iOS app and an Android app—and it gets embedded in other applications. If you’re building the world’s best assistant or companion for everybody at work, you have to travel with them. From a form factor perspective, you’re going to see more interesting devices, whether it’s a smartwatch or a smart pen. Our goal would be to make sure we’re running on them.

    Nataraj: I want to shift gears and talk about the business. You mentioned a marketing failure pre-ChatGPT, then a rebrand. Now that you’re a fast-growing company, with AI increasing productivity, does that mean you’re hiring less? If you had X salespeople at Rubrik, are you hiring fewer now for the same level of growth?

    Arvind Jain: First, a company is a group of people building something together. I firmly believe the scale of your business is proportional to the number of people you have. I don’t personally believe I can have a five-person company and generate a billion dollars. The productivity per employee is going to grow at a relatively linear pace. It’s just that to survive as a company, you have to do 10 times more work than you did before with the same number of people, because everyone is benefiting from AI.

    You have to be able to build products and experiences we couldn’t dream of before. You shouldn’t be thinking, “Can I have fewer people?” You have to think, “How do I achieve more with the number of people I can absorb?” You don’t have a choice. If you deliver the same kind of products as pre-AI, you won’t survive. We are growing very fast and investing in our people. We fundamentally believe the larger we are, the more we’ll be able to do. But at the same time, I’m a minimalist. I always try to ensure we are enabling every employee with the right tools and that they are fully capitalizing on AI to deliver way more than expected in the pre-AI world.

    Nataraj: What does it mean to be more AI-first? Do you do more AI education or align incentives?

    Arvind Jain: We started by just talking about the importance of AI in town halls. I don’t think we saw the results because people were too busy. Then we tried setting goals like “get 20% more productive,” which was a complete failure. Our third iteration was to just do one thing with AI. We don’t care about the ROI; just show that you’re trying to learn and get one meaningful thing done. That’s the top-down approach. From a bottom-up perspective, we allow people to bring in the right AI tools and we celebrate wins. We created a program called “Glean on Glean.” Every new hire, for their first month, ignores their hired role and instead plays with AI tools to build one workflow or agent. It’s been very effective, especially for new grads who don’t know the traditional way of working and are more well-versed with AI.

    Nataraj: What are one or two metrics you consistently watch that tell you whether you’re going in the right direction?

    Arvind Jain: For us, number one is customer satisfaction. We look at user engagement—how often our users use the product on a daily basis. That’s the most important metric. Number two, on the product side, we look at the type of things people are trying to do with it and if that set is expanding. For example, are more people becoming creators on Glean and building different sets of agents? From the business side, we look at standard metrics like retention rate and tracking our pipeline for demand. But as a CEO, probably the most important thing to watch is how our organization is feeling internally. What are the signs from the team? Are we ensuring we have mission alignment? Are people committed and motivated? Are we creating the right environment for them to grow and succeed? Those are the top-of-mind things for me.


    This conversation with Arvind Jain offers a clear look into how enterprise AI is moving beyond simple chat interfaces to create tangible value through sophisticated workflow agents. His insights provide a roadmap for how businesses can leverage AI to solve core productivity challenges.

    → If you enjoyed this conversation with Arvind Jain, listen to the full episode here on Spotify, Apple, or YouTube.
    → Subscribe to our newsletter and never miss an update.

  • Decagon’s Ashwin Sreenivas: Building a $1.5B AI Support Giant

    At just under 30, co-founders Jesse Zhang and Ashwin Sreenivas have built Decagon into one of the fastest-growing AI companies, achieving a $1.5B valuation in just over a year out of stealth. Backed by industry giants like Accel and Andreessen Horowitz, Decagon is redefining enterprise-grade customer support with its advanced AI agents, earning a spot on Forbes’ prestigious AI 50 list for 2025. In this episode of the Startup Project, host Nataraj sits down with Ashwin to explore the secrets behind their explosive growth. They discuss how Decagon moved beyond the rigid, decision-tree-based chatbots of the past by creating AI agents that follow complex instructions, how they found product-market fit by tackling intricate enterprise workflows, and the company’s long-term vision to build AI concierges that transform customer interaction.

    👉 Subscribe to the podcast: startupproject.substack.com

    Nataraj: So let’s get right into it. What is Decagon AI? What does the product do, and talk a little bit about the technology behind Decagon.

    Ashwin Sreenivas: You can think of Decagon as an AI customer support agent. For our customers, Decagon talks directly to their customers and has great conversations with them over chat, phone calls, SMS, and email. Our goal is to build these AI concierges for these customers. This idea of AI for customer support isn’t necessarily new; you’ve had chatbots for 10 years now, probably. But the thing that’s really different this time is if you look at the chatbots from as late as three or four years ago, it wasn’t a great experience. The reason it wasn’t a great experience is because you had these decision trees that everybody had to build, and it was a pain to build them and a pain to maintain. From a customer perspective, if you have a question or a problem that is one degree off from the decision tree that was built out, it was completely useless. That’s when you have people saying, “agent, agent, agent.” The thing that’s changed, and a lot of the core of what we’ve built, is a way to train these AI agents like humans are trained. Humans have standard operating procedures that they follow, and our AI agents have agent operating procedures that they follow. We’re able to essentially build these AI agents that can have much more fluid, natural conversations like a human agent would.

    Nataraj: Talking a little bit about the products, you mentioned chat, phone calls, emails. Do you have products for everything? If a company is coming to adopt Decagon, are they first starting with chat and then expanding to everything else? How does the customer journey look?

    Ashwin Sreenivas: This is actually very driven by our customers. For a lot of the more tech-native brands, think like a Notion or a Figma, you would never think about picking up the phone and calling them. You’d want to chat or email. Whereas some of our other customers like Hertz, you don’t really email Hertz. If you need a car, you’re going to call them up on the phone. So a lot of our deployment model is guided by our customers and how their customers want to reach out to them. Typically, most customers start with the method by which most of their customers reach out, and then they expand to all the other ones. It’s very common to start with chat and then expand to email and voice, or start with voice and expand to chat and email.

    Nataraj: I want to double-click on the point you mentioned about the decision tree model. I think around 2015, during the Alexa peak, everyone was building chatbots. I remember the app ecosystem where you had to build apps on Alexa or Microsoft’s Cortana. Conversational bots were the hype for two or three years, but they quickly stagnated when we realized all we were doing was replicating the “press one for this, press two for this” system on a chat interface. You define a decision tree, and anything outside of that is basically an if-else command line that ends with a catch-all driving you to a human. There are obviously a lot of players in customer support with existing tools. Do they have a specific edge on creating something like what Decagon is doing because of their existing data?

    Ashwin Sreenivas: No, I actually think, interestingly enough, because these customer service bots went through a few generations of tech, the tech is different enough that you don’t get too much of an advantage starting with the old tech. In fact, you start with a lot of tech debt that you then have to undo. Let’s say 10 years ago, you had to start with explicit decision trees where you program every single line. Then about five years ago, you had the Alexas of the world. It was a little bit of an improvement, but essentially all it did was allow a user to express what they want. They could say, “I want to return my order,” and the models were good at detecting intent—classifying a natural language inquiry into one of 50 things it knew how to do. But beyond that, everything became decision trees. The thing is now with these new models, because you have so much flexibility and the ability for them to follow more complex instructions and multi-step workflows, you can actually rebuild this from the ground up. It’s not just classifying an intent and then following a decision tree; we want the whole thing to be much more interactive for a better user experience. We had to rebuild it to ask, how does a human being learn? You have standard operating procedures. You say, “Hey, if a customer asks to return their order, first check this database to see what tier of customer they are. If they’re a platinum customer, you have a more generous return policy. If they’re not, you have a stricter one. You need to check the fraud database.” You go through many of these steps and then work with the customer. The core of what we’ve done is build out AI agents that can follow instructions very well, like a human does.

    Nataraj: This whole concept of AOPs (Agent Operating Procedures) that you guys introduced is very fascinating. You mentioned SOPs, which humans read, and then you have AOPs, which is sort of a protocol for the agent. Who is converting the SOP into an AOP? How easy is it to create this agent? Are you giving a generic agent that adapts to a customer’s SOP, or do I as a customer have to build the agent?

    Ashwin Sreenivas: The core Decagon product is one agent that is very good at following instructions and AOPs. We built this for time to value. For one, if you have to train an agent from scratch for every single customer, it’s going to take a lot of time for that customer to get onboarded. And two, it’s very difficult for that customer to iterate on their experiences. If you build one agent that, like a human, is very good at following instructions, a customer can come to it and say, “Here are the instructions I need you to follow,” and be up and running immediately. In terms of how these AOPs are created, most customers tend to have some set of SOPs already, and AOPs are actually extremely close to them. The only thing you need to change is to instruct it on how to use a company’s internal systems. It’s 99% English, and then there are a few keywords to tell it, “At this point, you need to call this API endpoint to load the user’s details,” or “At this point, you need to issue a refund using the Stripe endpoint.” That’s the primary difference from SOPs.
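
    As a purely hypothetical illustration of ‘99% English with a few keywords for system calls’ (Decagon’s actual AOP format isn’t public), an AOP might be written as a list of plain-language steps with explicit markers for the API calls the agent is allowed to make; the tool names below are made up.

```python
# Hypothetical Agent Operating Procedure: plain-English steps interleaved with
# explicit tool-call markers the agent must execute at those points.
return_order_aop = [
    "When a customer asks to return an order, confirm the order number first.",
    {"call": "crm.get_customer_tier", "args": ["customer_id"]},
    "If the customer is platinum tier, apply the 60-day return window; "
    "otherwise apply the standard 30-day window.",
    {"call": "fraud.check_history", "args": ["customer_id"]},
    "If the fraud check is clear and the order is within the window, "
    "offer a refund or an exchange and ask which the customer prefers.",
    {"call": "payments.issue_refund", "args": ["order_id"],
     "only_if": "customer chose a refund"},
]

for step in return_order_aop:
    print(step)
```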

    Nataraj: If you talk about the technology stack, are you using external models, or are you training your own models? What is the difference between a base model and what you’re delivering to a customer?

    Ashwin Sreenivas: We spend a lot of time thinking about models. We do use some external models, but we also train a lot of models in-house. The reason is, if you’re using external models, most of what you can do is through prompt tuning, and we found that models are only so steerable with just prompt tuning. We’ve spent a lot of time in-house taking open-source models and fine-tuning them, using RL on top of them, and using all of these techniques to steer them. To get these models to follow instructions well, you have to decompose the task. A customer comes in with a question, and I have all of these AOPs I could select from. The first decision is: is any of these AOPs relevant? If a user is continuing the conversation, are they on the same topic or should I switch to another AOP? At every step, there are a hundred micro-decisions to make. A lot of what we do is break down these micro-decisions and have models that are very, very good at each one.
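
    That decomposition into micro-decisions can be sketched as a simple router: one small model decides whether any AOP applies, another decides whether the customer is still on the same topic. The trivial keyword rules below stand in for the fine-tuned models Ashwin describes.

```python
AOPS = {"return_order": "return", "change_billing": "billing"}

def select_aop(message: str) -> str | None:
    """Micro-decision 1: is any AOP relevant to this message?"""
    for aop, keyword in AOPS.items():
        if keyword in message.lower():
            return aop
    return None

def same_topic(previous_aop: str | None, message: str) -> bool:
    """Micro-decision 2: is the customer continuing the same topic?"""
    return previous_aop is not None and AOPS[previous_aop] in message.lower()

state = None
for msg in ["I want to return my order", "Actually, about my billing date..."]:
    if not same_topic(state, msg):
        state = select_aop(msg)
    print(f"{msg!r} -> active AOP: {state}")
```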

    Nataraj: The industry narrative has been that only companies with very large capital can train models. Are you seeing that cost drop? When you mentioned you’re training open-source models, is that becoming more accessible?

    Ashwin Sreenivas: We’re not pre-training our models from scratch. We take open-source models and then do things on top of those. The thing that has changed dramatically is that the quality of the open-source models has gotten so good that this is now viable to do pretty quickly.

    Nataraj: Which models are better for your use case?

    Ashwin Sreenivas: We use a whole mix of models for different things because we found that different base models perform differently for different tasks. The Google Gemma models are great at very specific things. The Llama models are great at very specific things. The Qwen models are great at very specific things. Even for one customer service message that comes in, it’s not one message going to one model. It’s one message going to a whole sequence of models, each of which is good at doing different things to finally generate the final response.

    Nataraj: It’s often debated that as bigger models like GPT-5 or Gemini improve, they will gain the specialized capabilities that smaller, fine-tuned models have. What is the reality you’re seeing?

    Ashwin Sreenivas: I would push back against that argument for two reasons. Number one, while the bigger models will all have the capabilities, the level of performance will change. If you have a well-defined task, you can have a model that’s 100 times smaller achieve a higher degree of performance if you just fine-tune it on that one task. I don’t want it to code in Python and write poems; I just want it to get really good at this one thing. When measured on that one task, it will probably outperform models a hundred times its size. Number two, which is equally important, is latency. A giant model might take five seconds to generate a response. A really small model cuts that time by a factor of 10. Over text, five seconds might not matter, but on a phone call, if it’s silent for five seconds, that’s a really bad experience. For that, you have to go toward the smaller models.

    Nataraj: Can you talk about why you and your co-founder picked customer service as a segment when you decided to start a company?

    Ashwin Sreenivas: When we started this company, it was around the time when GPT-3.5 Turbo and GPT-4 were out. We were looking at the capabilities and thought, wow, this is getting just about good enough that it can start doing things that people do. As we looked at the enterprise, we asked, where is there a lot of demand for repetitive, text-based tasks? Number one was customer support teams, and number two was operations teams. As we talked to operations leaders, the number one demand was in customer support. They told us, “Look, we’re growing so quickly, our customer support volume is scaling really quickly, which means we need to hire a lot more people, and we can’t afford to do that. We are desperate.” Initially, it looked like a very crowded space, but as we talked to customers, we found it was crowded for smaller companies with simple tasks, where 90% of their volume was, “I want to return my order.” But for more complex enterprises, there wasn’t anything built that could really follow their intricate support flows. That was the wedge we took—to build exclusively for companies with very complex workflows. The other thing that was interesting was our long-term thinking. If you build an agent that can instruction-follow very well, you enable businesses to eventually grow this from customer support into a customer concierge.

    What I mean by that is, let’s say you want to fly from San Francisco to New York. You go to your favorite airline’s website, type in your search, and it gives you 30 different flights to pick from. That’s a lot of annoying steps. A much better experience would be to text your airline and say, “I want to go to New York next weekend.” An AI agent on the other side knows who you are, your preferences, and your budget. It looks through everything and says, “Hey, here are two options, which one do you like?” This AI agent also knows where you like to sit and says, “By the way, I have a free upgrade available for you. Is that okay?” You say yes, and it says, “Booked.” The big difference is this is a much more seamless experience. Most websites today shift the burden of work onto the user. Now, it shifts to a world where you express your intent to an AI agent that then does the work for you. That was a really interesting shift for us. Building these customer support agents is the first step to building these broader customer concierges.

    Nataraj: How did you acquire your first five customers? What did that journey look like?

    Ashwin Sreenivas: Early customer acquisition is always very manual. There’s no silver bullet. It’s just a lot of finding everyone in your networks, getting introductions, and doing cold emailing and cold LinkedIn messaging. It’s brute force work. But the other thing for us is we never did free design pilots; we charged for our software from day one. This doesn’t mean we charged them on day one of the contract. We’d typically say, “There’ll be a four-week pilot, and we’ll agree upfront that if you like it at the end of those four weeks, this is what it’s going to cost.” We never had an open-ended, long-term period where we did things for free because, in the early days, the number one thing you’re trying to validate is, am I building something that people will pay money for? If it’s truly valuable, you should be able to tell your potential customer, “Hey, if I accomplish A, B, and C, will you pay me this much in four weeks?” If it’s a painful enough problem, they should say yes. This helped us weed through bad business models and bad initial ideas quickly.

    Nataraj: What business impact and success metrics do your customers look at when using Decagon?

    Ashwin Sreenivas: Customers think about value in two ways primarily. One is what percentage of conversations we are able to handle ourselves successfully—meaning the user is satisfied and we have actually solved their problem. If we can solve a greater percentage of those, fewer support tickets ultimately make their way to human agents, who can then focus their time on more complicated problems. The second benefit, which was a little counterintuitive, was that a lot of these companies expanded the amount of support they offered. It’s not that companies want to minimize support; they want to give as much as they can economically. If it cost me $10 for one customer interaction and all of a sudden that becomes 80 cents, I’m not just going to save all that money. I’m going to reinvest some of that in providing more support. We’ve noticed that their end customers actually want that increased level of support. So now, instead of phone lines being open only from 9 a.m. to 5 p.m., it becomes 24 hours a day. Instead of offering support only to paid members, we offer support to everybody. There’s this latent demand for increased support, and by making it much cheaper, businesses can now offer more. At the end of the day, this leads to higher retention and better customer happiness.

    Nataraj: You also have support for voice agents, which is particularly interesting. What has the traction been like? Do customers realize they’re talking to an AI?

    Ashwin Sreenivas: In general, all our voice agents say, “Hi, I’m a virtual agent here to help you” or something like that. But the other interesting thing is most customers calling about a problem don’t want to talk to a human; they want their problem solved. They don’t care how, they just want it solved. For us, making it sound more human is not about giving the impression they’re talking to a human; it’s to make the interaction feel more seamless. You want responses to be fast. At the end of the day, the primary goal is, how can we solve the customer’s problem? Even if the customer is very aware they’re talking to an AI agent, but that agent solves their problem in 10 seconds, that’s a good experience. Versus talking to a human who takes 45 minutes, which is a bad experience. We have several customers now where the NPS for the voice agents is as good or higher than human agents because if the AI agent can solve their problem, it solves it immediately. And if it can’t, it hands it over to a human immediately. Either way, you end up having a reasonably good experience.

    Nataraj: Has there been a drop in hiring in support departments? Are agents replacing humans or augmenting them?

    Ashwin Sreenivas: It really depends on the business. If AI agents can handle a bigger chunk of customer inquiries, you can do one of three things. One, you can handle more incoming support volume. You put it on every page, you give support to every member, you do it 24 hours a day. Your top-line support volume will go up, but your customers have a better experience, and you can keep the number of human agents the same. Other people might say, “I’m going to keep the amount of customer support I do the same. There are fewer tickets going to human agents, so now I can have those agents do other higher-value things,” like go through the high-priority queue more quickly or move to a different part of the operations team.

    Nataraj: Can you talk about the UX of the product? People have different definitions of agents. What kind of agent are we talking about here?

    Ashwin Sreenivas: Interacting with Decagon is exactly like interacting with a human being. From the end user’s perspective, it’s as though they were talking to a human over a chat screen or on the phone. Behind the scenes, the way Decagon works is that each business has a set of AOPs that these AI agents have access to. The AOPs allow the agents to do different things—refund an order, upgrade a subscription, change billing dates. The Decagon agent is just saying, “Okay, this question has come in. Do I need to work through an AOP with the customer to solve this problem?” And it executes the AOPs behind the scenes.

    Nataraj: Before your product, a support manager would look at their team’s activities. How does that management look now on your customer’s side?

    Ashwin Sreenivas: There’s been an interesting shift. Rather than training new human agents, I’ve trained this AI agent once, and now my job becomes, how can I improve this agent very quickly? We ended up building a number of things in the product to support this. If the AI agent had one million conversations this month, no human can read through all of that. We had to build a lot of product to answer, what went well? What went poorly? What feedback should I take to the rest of the business? What should I now teach the agent so that instead of handling 80% of conversations, it can handle 85%? The primary workflow of the support manager has changed from supervising to being more of an investigator and agent improver, asking, “What didn’t go well and how can I improve that?”

    Nataraj: Are the learnings from one mature customer flowing back into the overall agent that you’re building for all companies?

    Ashwin Sreenivas: We don’t take learnings from one customer and apply them to another because most of our customers are enterprises, and we have very strong data and model training guarantees. But the learning we can take is what kinds of things people need these agents to do. For instance, we learned early on that sometimes an asynchronous task needs to happen. Decagon didn’t have support for that, so we realized that use case was important and extended the agent to be able to do tasks like that. It’s those kinds of learnings on how agents are constructed that we can take cross-customer. But for a lot of these customers, the way they do customer service is a big part of their secret sauce, so we have very strong guarantees on data isolation.

    Nataraj: How are you acquiring customers right now?

    Ashwin Sreenivas: We have customers through three big channels. Number one is referrals from existing customers. Support teams will often say, “Hey, we bought this thing, it’s helping our support team,” and they’ll tell their friends at other companies. Number two is general inbound that we get because people have heard of Decagon. And three, we also have a sales team now that reaches out to people and goes to conferences.

    Nataraj: Both you and your co-founder had companies before. How did the operating dynamics of the company change from your last company to now? Did access to AI tools increase the pace?

    Ashwin Sreenivas: A lot of things changed. For both of our first companies, we were both first-time founders figuring things out. I think the biggest thing that changed was how driven by customer needs we were. We didn’t overthink the exact right two-year strategy or how we were going to build moats over three years. We said, the only thing we’re going to worry about now is, how do we build something that someone will pay us real money for in four weeks? That was the only problem. That simplifies things, and we learned that all the other things you can figure out over time. For instance, with competitive moats, when we sold a deal in the early days, we would ask, “Why did you buy us?” They would tell us, “This competitor didn’t have this feature we needed.” And we were like, great, so we should do more of that because clearly this is valuable.

    Nataraj: It’s almost like you just listen to the market rather than putting your own thesis on it.

    Ashwin Sreenivas: Yeah. I think there was a very old Marc Andreessen essay about this: good markets will pull products out of teams. The market has a need, and the market will pull the product out of you.

    Nataraj: What’s your favorite AI product that you use personally?

    Ashwin Sreenivas: I use a number of things. For coding co-pilots, Cursor and Supermaven are great. For background coding agents, Devin is great. I like Granola for meeting notes. I used to hate taking meeting notes, and now I just have to jot down things every now and then. I think that captures most of what I do because either I’m writing code or talking to people, and that has become 99% of my life outside of spending time with my wife.

    Nataraj: Awesome. I think that’s a good note to end the conversation. Thanks, Ashwin, for coming on the show.

    Ashwin Sreenivas: Yeah, great being here. Thanks for having me.

    This conversation with Ashwin Sreenivas provides a masterclass in building a category-defining AI company, highlighting the power of focusing on genuine customer pain points and the massive potential for AI to create more seamless, personalized business interactions. His insights reveal a clear roadmap for how AI is moving from simple automation to becoming a core driver of customer experience.

    → If you enjoyed this conversation with Ashwin Sreenivas, listen to the full episode here on Spotify, Apple, or YouTube.
    → Subscribe to our newsletter: startupproject.substack.com

  • From Forbes to Founder: Alex Konrad on new media, creator economy, AI tools for journalists, Midas List secrets, and why traditional media is losing to independent voices

    Discover the evolving roles of traditional vs. new media in tech and gain insights into effective content creation strategies.

    About the episode:

    In this episode, Nataraj hosts Alex Konrad, founder and editor of Upstart Media, to explore the shifting dynamics of tech media. Alex shares his experiences at Forbes, defines the roles of traditional and new media, and discusses the challenges and opportunities for new tech publications. He also dives into storytelling trends, content creation strategies, and the impact of AI on the media landscape. Learn how new media publications can thrive in a decentralized ecosystem and why staying close to the "engine room" of innovation is crucial for success.

    What you’ll learn

    • Identify the key differences between traditional and new media in the tech landscape.
    • Understand the challenges and opportunities for new tech publications in a decentralized media ecosystem.
    • Discover effective content creation and distribution strategies for building a successful media brand.
    • Learn how to build a pipeline of stories and get sources as a media startup without the brand of a larger publication.
    • Explore the impact of AI on the media landscape and its potential to empower independent creators.

    About the Guest and Host:

    Alex Konrad: Founder and editor of Upstart Media, a tech publication focused on the startup ecosystem.
    Connect with Alex:
    → LinkedIn: https://www.linkedin.com/in/alexrkonrad/
    → Website: https://www.upstartsmedia.com/

    Nataraj: Host of the Startup Project podcast, Senior PM at Azure & Investor.
    → LinkedIn: https://www.linkedin.com/in/natarajsindam/
    → Substack: https://startupproject.substack.com/

    In this episode, we cover

    • (00:01) Introduction to Alex Konrad and Upstart Media
    • (01:24) Alex’s experience at Forbes and its role in today’s tech media
    • (03:43) Defining traditional vs. new media and the rise of independent content creators
    • (06:25) The challenges and differences between working at a large publication vs. running a startup media company
    • (07:01) The components of a new tech publication in 2025: Substack, YouTube, podcasts, and events
    • (08:41) Content strategy: balancing consistency, quality, and multiple platforms
    • (11:05) Admired content creators and their successful practices in the media space
    • (13:12) The importance of an existing brand and network for new media ventures
    • (15:49) The creator economy’s power law and the challenges of standing out
    • (16:56) Distribution strategies: leveraging LinkedIn, X, and Substack recommendations
    • (18:56) The evolving landscape of social media and the rise of Threads
    • (22:16) Bandwidth challenges and the need for AI-powered tools for content creation and distribution
    • (25:05) The Midas List: its methodology, significance, and controversies
    • (30:41) Other influential lists and their impact on the tech industry
    • (32:45) Identifying potential niches and innovative approaches in the media space
    • (36:04) Alex’s insights on AI-driven workflows and automation in various industries
    • (38:48) Nataraj’s perspective on AI as a transformative force and its potential impact
    • (41:11) The importance of being close to the "engine room" of innovation
    • (42:55) Building a pipeline without a big brand name and leveraging word-of-mouth

    Don’t forget to subscribe and leave us a review/comment on YouTube, Apple, Spotify, or wherever you listen to podcasts.

    #Startup #TechMedia #NewMedia #ContentCreation #AI #VentureCapital #Startups #Forbes #Substack #Podcast #SocialMedia #Threads #MidasList #Innovation #Entrepreneurship #MediaStrategy #TechTrends #DigitalMedia #ContentMarketing #UpstartMedia

  • AI Hype, Future Trends & Research Trends with Best Selling Author of The Master Algorithm Pedro Domingos

    This week we are republishing one of our favorite conversations that didn’t get much visibility when it first came out.

    About the episode:

    Nataraj hosts Pedro Domingos, a distinguished figure in AI and machine learning, to discuss the current state of AI, hype cycles, and future trends. Pedro shares insights from his early career, his widely-read book "The Master Algorithm," and his satirical novel "2040." He offers a critical perspective on LLMs, the AI safety debate, and what truly drives progress in the field, while providing guidance on navigating the complex information landscape and choosing impactful research problems. The discussion dives deep into the societal impact of AI, the importance of critical thinking, and the future of AI research, offering a unique blend of technical insights and thought-provoking commentary. Why should you care? Understand the reality behind the AI hype, identify future trends, and learn how to navigate the complex world of AI research.

    What you’ll learn  

        – Understand the reality of AI progress, separating it from the hype and misconceptions surrounding LLMs and AGI.

        – Learn about the history of AI, including Herb Simon’s Nobel Prize and the evolution of machine learning as a subfield of AI.

        – Discern the truth about AI, including the importance of machine learning, reasoning, and other AI fields.

        – Identify the key factors driving investment in AI and the potential risks of the current AI bubble, including how technological progress can be shaped using S-curves.

        – Gain insights into the future of AI research, exploring the limitations of transformers and the need for diverse research directions.

        – Explore the concept of "The Master Algorithm" and how it provides a comprehensive view of AI, beyond narrow slivers of research.

        – Learn practical tips for navigating the information landscape, including identifying reliable sources, being a critical consumer of information, and maximizing your "Sharpe ratio" in terms of impact.

        – Discover the importance of mentorship and community in AI research, including attending conferences, engaging in discussions, and learning the empirical method of machine learning.

    About the Guest and Host:

    Guest Name: Pedro Domingos is a professor emeritus of computer science and engineering at the University of Washington and a leading expert in artificial intelligence.

    Connect with Guest: 

    → LinkedIn: https://www.linkedin.com/in/pedro-domingos-77b183/

    Nataraj: Host of the Startup Project podcast, Senior PM at Azure & Investor. 

    → LinkedIn: https://www.linkedin.com/in/natarajsindam/  

    → Twitter: https://x.com/natarajsindam

    → Substack: https://startupproject.substack.com/

    → Website: https://thestartupproject.io

    In this episode, we cover  

        (00:01) Introduction and Guest Introduction

        (01:26) Pedro’s early career and why he chose machine learning

        (03:51) Nobel Prizes and AI

        (07:12) AI vs Machine Learning

        (08:53) LLMs and the current AI hype cycle

        (14:17) Comparing Models to Human Intelligence

        (16:56) Investment in AI and progress

        (21:22) Thoughts on OpenAI

        (25:56) Investing in talent

        (29:05) AI Safety

        (35:10) Master Algorithm

        (40:27) Jensen Huang and NVIDIA’s Pivot

        (43:59) AI Chip Projects

        (47:05) 2040

        (52:19) How AI Will Change Society

        (55:12) Recommendation Systems

        (59:56) Sources of Consumption

        (01:07:50) What Pedro is consuming

        (01:09:59) Advice to those interested in AI research

        (01:12:41) Mentors

        (01:15:47) Advice to Researchers

    Don’t forget to subscribe and leave us a review/comment on YouTube, Apple, Spotify, or wherever you listen to podcasts.

    #AI #MachineLearning #DeepLearning #ArtificialIntelligence #MasterAlgorithm #2040 #PedroDomingos #Research #Innovation #Tech #Technology #Podcast #Startup #Entrepreneurship #VentureCapital #LLMs #AGI #AISafety #FutureofAI

  • Offline AI, personal AI assistants, edge computing revolution, compression of neural networks, privacy, compute efficiency, AI infrastructure startups | David Stout, Founder of webAI

    Discover how webAI is revolutionizing AI by enabling powerful models to run directly on devices.

    About the episode:

    Nataraj chats with David Stout, founder of webAI, about bringing AI infrastructure to edge devices. David shares his journey from a Michigan farm to pioneering techniques for compressing AI models onto phones, laptops, and even airplanes. They discuss the importance of data privacy, real-time processing, and the shift from cloud-based AI to a web of interconnected, specialized AI systems working across devices. Learn about webAI’s vision for the future of AI and how they achieved a $700 million valuation.

    What you’ll learn

    – Understand the limitations of cloud-based AI and the advantages of edge computing for data privacy and real-time processing.

    – Discover how webAI compresses and optimizes large AI models to run efficiently on everyday devices like phones and laptops.

    – Learn about the potential of a decentralized “web of models” and how it could revolutionize various industries.

    – Explore the key factors driving webAI’s success, including their focus on horizontal technology and support for diverse AI models.

    – Gain insights into the future of AI hardware and software.

    About the Guest and Host:

    Guest Name: David Stout, Founder of webAI, pioneering AI infrastructure for edge devices.

    Connect with Guest:

    → LinkedIn: https://www.linkedin.com/in/davidpstout/ 

    → Website: https://www.webai.com/

    Nataraj: Host of the Startup Project podcast, Senior PM at Azure & Investor.

    → LinkedIn: https://www.linkedin.com/in/natarajsindam/

    → Substack: https://startupproject.substack.com/

    In this episode, we cover

    (00:01) Introduction and Guest Introduction

    (01:35) David’s Journey in AI and Machine Learning

    (03:51) Bringing AI Models to Devices

    (05:49) Applications of AI on Devices

    (08:10) The Thesis for Starting webAI

    (11:16) Optimizing for Specialized Models

    (15:02) webAI’s Customers and Use Cases

    (17:00) Hardware Optimization for webAI

    (19:12) Lightweight Personal Models

    (22:55) The Trajectory of Foundational Model Companies and AGI

    (31:48) Emergent Behavior and Intelligence

    (35:17) Prompts Don’t Pay Bills

    (41:22) Meta’s Approach to Recruiting and Spending

    (44:51) Form Factors in AI

    (51:04) Surprising AI Products

    (52:42) Breakthroughs in AI Research

    (56:31) xAI’s Models

    (01:03:24) The Future of Google and Search

    (01:06:39) Finding David and webAI

    Don’t forget to subscribe and leave us a review/comment on YouTube, Apple, Spotify, or wherever you listen to podcasts.

    #AI #EdgeComputing #MachineLearning #DataPrivacy #WebAI #ArtificialIntelligence #DecentralizedAI #AIInfrastructure #Startup #Technology #Innovation #Podcast #Entrepreneurship #TechStartups #DeepLearning #ModelCompression #AIModels #Inference #NatarajSindam #StartupProject

  • Fixing Broken Meetings, Managing Calendars with AI, and Redesigning the Future of Work | Matt Martin CEO & Co-Founder Clockwise

    In this episode, Nataraj welcomes Matt Martin, CEO of Clockwise, to explore the science of smart scheduling. Discover how Clockwise uses AI to optimize calendars, reduce meeting overload, and create more focused work time. Matt shares insights on balancing collaboration with individual productivity, the impact of remote work on meeting culture, and the future of AI-powered time management. Learn actionable strategies to transform your workday and boost your team’s efficiency. Why care? Because reclaiming your time is the first step to achieving your goals.

    What you’ll learn

    – Implement AI-driven tools to analyze and optimize your schedule for peak productivity.

    – Balance maker and manager schedules to accommodate different work styles within your team.

    – Identify and eliminate unnecessary meetings to free up valuable time for focused work.

    – Leverage asynchronous communication methods to reduce the reliance on synchronous meetings.

    – Understand the impact of remote work on meeting culture and adapt your strategies accordingly.

    – Measure the ROI of productivity tools to ensure they are contributing to your bottom line.

    – Explore the potential of AI agents to automate scheduling and optimize workflows.

    – Discover the importance of memory and context in AI assistants for the workplace.

    About the Guest and Host:

    Guest Name: Matt Martin, CEO of Clockwise, helping individuals and teams create smarter schedules with AI.

    Connect with Guest:

    → LinkedIn: https://www.linkedin.com/in/voxmatt/

    → Website: https://www.getclockwise.com/

    Nataraj: Host of the Startup Project podcast, Senior PM at Azure & Investor.

    → LinkedIn: https://www.linkedin.com/in/natarajsindam/

    → Twitter: https://x.com/natarajsindam

    → Substack: https://startupproject.substack.com/

    In this episode, we cover

    (00:01) Introduction to Matt Martin and Clockwise

    (00:58) What is Clockwise and how customers use it

    (02:19) Optimizing meetings in organizations

    (02:56) Maker Schedule versus Manager Schedule

    (05:38) Trends in non-scheduled meetings

    (07:33) The shift in adopting new SaaS products

    (08:43) Impact of zero interest rate environment on SaaS buying

    (11:32) AI agents and their promises

    (12:49) Measuring efficiency gains with AI tools

    (14:14) Outcome-based pricing models

    (17:46) How Clockwise leverages AI in its product

    (20:51) MCP vs APIs

    (22:26) The trend of half-baked tools

    (24:54) Rethinking fundamental apps with AI

    (26:56) Adding AI features on current products

    (29:03) Power of products like Zapier and n8n with AI

    (33:08) Categories of AI companies that are likely to succeed

    (36:49) AI assistant for your workplace

    (39:26) User interface

    (46:01) How to discover Matt and Clockwise

    Don’t forget to subscribe and leave us a review/comment on YouTube, Apple, Spotify, or wherever you listen to podcasts.

    #Clockwise #AIScheduling #CalendarOptimization #ProductivityTips #TimeManagement #MeetingStrategy #RemoteWork #HybridWork #AITools #SaaS #Entrepreneurship #StartupProject #NatarajSindam #Podcast #TechInnovation #WorkflowAutomation #ArtificialIntelligence #ProductivityHacks #MattMartin