Tag: AI

  • Beyond Feature-Pushing: The Product Management Behind Relay.app’s AI Agent Vision


    In the ever-evolving landscape of AI and automation, building a product that truly resonates with users is a challenging yet rewarding odyssey. In a recent episode of the Startup Project podcast, Nataraj engaged in a fascinating conversation with Jacob Bank, the founder and CEO of Relay.app, shedding light on the intricate product development journey of an AI agent building platform. This blog post delves into the key product development insights gleaned from their discussion, offering valuable lessons for product managers, engineers, and entrepreneurs navigating the complex terrain of AI-first product creation.

    The Winding Road to Product Clarity:

    Relay.app’s journey is a testament to the power of iteration and the importance of listening to the market. As Bank candidly shared, the company’s early days were marked by a period of experimentation and exploration. Founded in 2021 with the vision of enhancing cross-tool coordination using AI, the team initially ventured down multiple paths, building eight or nine different product prototypes, each exploring different facets of the core concept. This period of “wandering in the desert” was crucial in honing their understanding of customer needs and refining their product vision.

    The initial thesis, predating the widespread adoption of LLMs, centered on using AI to bridge the gaps between disparate tools. However, the breakthrough came when Relay.app focused on capturing repeated tasks that combined automated components with human judgment. This shift led to the development of a workflow tool positioned between Zapier-style automation and Asana-style task management.

    The “Duct Tape” Dilemma and the Pivot to AI Agents:

    While the workflow product garnered some traction, the team recognized a crucial limitation: positioning themselves as an automation tool inadvertently limited their audience. The label “no-code workflow automation” often attracts a niche segment of tech-savvy users, while the broader opportunity lies in empowering every business to leverage AI for increased productivity.

    This realization spurred a strategic evolution, transitioning Relay.app from an AI-powered automation platform to an AI agent building platform. This shift wasn’t merely semantic; it represented a fundamental change in product philosophy. Instead of simply connecting tools, Relay.app aimed to provide a platform where users could create intelligent agents that proactively work on their behalf.

    Integrations: A Core Competency, Not an Afterthought:

    A recurring theme throughout the conversation was the critical importance of integrations. Bank emphasized that integrations are not a mere checkbox feature but skilled work requiring top-tier engineering talent. Building robust and reliable integrations with a wide array of tools is essential for AI agents to effectively perform their tasks.

    Relay.app currently boasts around 120 native integrations and is strategically working toward expanding this number to 300-500. The focus is on providing comprehensive coverage across essential business tool categories, including email, calendar, messaging, CRM, and marketing automation. Bank’s belief is that agents will only be as useful as their ability to interact with the existing tools in your workflow.

    Human-in-the-Loop: Building Trust and Control:

    As AI becomes increasingly integrated into our workflows, the role of human oversight remains paramount. Bank emphasized the necessity of a human-in-the-loop mechanism, allowing users to review and provide feedback on the agent’s planned actions before they are executed.

    This approach not only builds trust in the AI system but also allows for continuous learning and improvement. By incorporating user feedback, the agent can refine its behavior and better align with human intent. Furthermore, should an AI deviate, it is important to design workflows in which a user can course-correct or take over in real time. This balance of delegation and human interaction is vital for establishing true AI augmentation.
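    The review-before-execute pattern described above can be sketched in a few lines. This is a minimal illustration, not Relay.app's actual implementation; all names (`ProposedAction`, `run_with_approval`, the `execute` and `request_review` callbacks) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """An action the agent wants to take, described before it runs."""
    tool: str      # e.g. "email.send" -- hypothetical tool identifier
    summary: str   # human-readable description shown to the reviewer
    payload: dict  # arguments the tool would receive

def run_with_approval(action: ProposedAction, execute, request_review):
    """Pause the workflow until a human approves, edits, or rejects the action."""
    decision = request_review(action)  # blocks on human input
    if decision["verdict"] == "approve":
        return execute(action)
    if decision["verdict"] == "edit":
        # Human course-corrects: apply their changes, then run
        action.payload.update(decision.get("changes", {}))
        return execute(action)
    return None  # rejected: nothing is executed
```

    The key design point is that `execute` is only ever reached through the human decision, so the agent can propose freely while the user retains final control.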

    Product-Led Content and the Power of Community:

    Relay.app’s go-to-market strategy revolves around product-led content, showcasing the tangible benefits of AI agents through compelling use cases. Bank himself actively creates content, including LinkedIn posts and YouTube tutorials, demonstrating how users can build AI agents to solve specific problems.

    This approach not only drives product awareness but also fosters a thriving community of users who share their own creations and insights. By empowering users to create and share templates, Relay.app has created a virtuous cycle of product adoption and community engagement. This product-led content drives organic growth by educating and empowering users.

    The Future of Product Development: AI-Powered Teams:

    Bank envisions a future where product teams are leaner, more agile, and empowered by AI. Instead of relying on large teams with specialized roles, he believes that individuals will increasingly take on a “player-coach” role, combining strategic vision with practical execution.

    This shift is enabled by AI agents, which can automate mundane tasks and free up human employees to focus on higher-level thinking and problem-solving. The key lies in identifying the right tasks for AI automation and designing workflows that seamlessly integrate human expertise.

    Lessons Learned and the Path Forward:

    Jacob Bank’s product development journey with Relay.app offers valuable insights for anyone building AI-first products. The importance of iteration, customer feedback, robust integrations, human-in-the-loop design, and product-led content cannot be overstated.

    As the AI landscape continues to evolve, product teams must embrace a flexible and adaptable approach, constantly refining their products and strategies to meet the ever-changing needs of their users. By focusing on building trustworthy and valuable AI agents, they can unlock new levels of productivity and innovation.

    Ultimately, Relay.app’s experience underscores the importance of moving beyond hype and focusing on delivering tangible value. By embracing a user-centric approach and prioritizing robust integrations, human oversight, and product-led growth, product teams can successfully navigate the challenges of the AI revolution and build products that truly transform the way we work.


    Nataraj is a Senior Product Manager at Microsoft Azure and the author of Startup Project, which features insights on building the next generation of enterprise technology products and businesses.


    Listen to the latest insights from leaders building next-generation products on Spotify, Apple, Substack and YouTube.

  • How to Build a Thriving Venture Firm with a Billion Dollars in Assets | David Blumberg


    David Blumberg, a seasoned investor with decades of experience in early-stage tech companies, recently joined Nataraj on the Startup Project Podcast to discuss his investment journey, successes, misses, and current focus.

    Blumberg’s path to venture capital began unconventionally. Inspired by a desire to solve big problems, he initially pursued government and economics. However, three pivotal experiences steered him towards business: the thrill of entrepreneurship through a student-run distribution service, disillusionment with government bureaucracy, and a thesis on African-Israeli relations which highlighted the enduring power of economic interests over political rhetoric.

    Early Investments and the Rise of Startup Nation

    Blumberg’s investing career began at T. Rowe Price, where he analyzed companies poised for IPOs.  His first investment was in Scitex, a significant step as it marked T. Rowe Price’s first investment in a publicly traded Israeli company.  He challenged the prevailing view of Israel as a risky, socialist country, arguing that these factors were already reflected in stock valuations. This insight led to further investments in the burgeoning Israeli tech scene. 

    Blumberg highlighted the importance of government deregulation in fostering Israel’s tech boom, drawing parallels with India’s economic liberalization in the 1990s.  He recalled his involvement with Yozma, a government program designed to attract foreign venture capital to Israel. While acknowledging Yozma’s role in promoting collaboration between international and Israeli investors, he emphasized that the government’s primary contribution was simply “getting out of the way” of entrepreneurs.

    From Family Offices to Blumberg Capital

    After T. Rowe Price, Blumberg worked at Claridge, a family office in Montreal, where he gained valuable experience navigating the different investment criteria of family offices. He then founded Blumberg Capital, initially operating as a “virtual venture catalyst” connecting family offices with promising deals. This evolved into a successful venture capital fund, now on its sixth fund, with over 65 companies in its portfolio. The firm employs a two-pronged strategy: early-stage investments (pre-seed to Series A) and growth investments (late Series A to early Series B).

    The Enduring Power of Teams

    Blumberg stressed the paramount importance of talented teams, especially in pre-seed investments. He recounted his early investment in Nutanix, where he recognized the exceptional technical abilities of the founding team, which eventually led to a highly successful IPO. He underscored the importance of strong leadership, citing the example of Check Point Software, another successful investment with a founding team possessing diverse skills. He further emphasized the firm’s unique approach to supporting its investments through its CIO Innovation Council, providing valuable feedback and access to potential customers.

    Looking Ahead

    Blumberg Capital’s current thesis revolves around data-intensive companies applying AI and machine learning to specific verticals. Their portfolio includes companies like Vair-AI (AI for mining), Imogene (cancer detection), Joshua (insurance policy writing), and Telen (automated receipt inspection).

    Beyond the fund, Blumberg’s personal investment strategy involves diversifying public stock holdings, real estate, and contrarian investments in oil and gas, driven by his belief in “energy humanitarianism.” He cites Peter Thiel, Joe Lonsdale, Marc Andreessen, Ben Horowitz, and the team at Sequoia as investors he admires.

    He emphasizes the importance of continuous learning, adapting to changing technologies, and understanding the interplay of economics and policy.  His advice to young investors? “Always get your contracts in writing!”  This simple yet crucial step protects hard work and sets the stage for success.

    To hear the full conversation, tune into the Startup Project Podcast episode with David Blumberg.  Subscribe on Spotify, Apple Podcasts, and YouTube.

  • From Meeting Notes to Co-pilot Everywhere: A Product Manager’s Guide to Building Expansive AI Products


    The era of basic AI is over. Product Managers, it’s time to level up. We’ve seen the demos, played with the chatbots, and scratched the surface of what AI can do. But the real game-changer is building AI that proactively assists, optimizes, and anticipates user needs across every aspect of their work. Want to know how to build that kind of next-gen AI product? Listen closely to David Shim, CEO of Read.ai. In a recent Startup Project interview, Shim laid out the roadmap, not just for better meeting summaries, but for a future where AI is a true “co-pilot for everyone, everywhere.” This isn’t just a vision; it’s a $50 million Series B-backed reality. Product Managers, the future of productivity is being built now – are you ready to lead the charge?

    Read.ai, initially known for its AI meeting summarizer, harbors a much grander vision: to be a “co-pilot for everyone, everywhere.” This ambition, backed by a recent $50 million Series B raise, isn’t just about better meeting notes; it’s about fundamentally rethinking how AI can augment human productivity across all facets of work and life. For product managers eager to build truly impactful AI products, Shim’s journey and insights are invaluable.

    Start with the Problem, Not Just the Technology:

    Shim’s story of Read.ai’s inception is a powerful reminder for product managers. It didn’t begin with a fascination with large language models (LLMs) or the latest AI breakthroughs. It started with a personal pain point: the agonizing realization of being stuck in unproductive meetings. “Within two or three minutes of a call, you know if you should be there or not… but now I turned off my camera. I cannot leave this meeting,” Shim recounts.

    This relatable frustration became the seed for Read.ai. For product managers, this underscores a crucial principle: innovation begins with identifying a genuine problem. Don’t get swept away by the hype of new technologies. Instead, deeply understand user needs, frustrations, and inefficiencies. What are the “meetings” – metaphorical or literal – where your users are feeling stuck and unproductive?

    Unlocking Unconventional Data for Deeper Insights:

    Most AI products today heavily leverage text data. Read.ai, however, took a different path, recognizing the untapped potential of video and metadata. Shim’s “aha!” moment came from observing reflections in someone’s glasses during a virtual meeting, sparking the idea to analyze video for sentiment and engagement.

    This highlights a critical lesson for product managers: look beyond the obvious data sources. While text transcripts are valuable, they are just one layer of the story. Consider the rich data exhaust often overlooked – video cues, metadata like speaking speed, interruption patterns, response times to emails and messages. As Shim points out, “large language models don’t pick up” on the crucial reactions and non-verbal cues that humans instinctively understand.

    By incorporating this “reaction layer,” Read.ai’s summaries became materially different and more human-centric, highlighting what truly resonated with participants based on their engagement, not just the words spoken. For product managers, this means thinking creatively about data. What unconventional data sources can you leverage to build richer, more insightful AI experiences?
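    The kind of metadata signals described above (speaking speed, interruptions, talk time) can be derived from simple meeting-event logs. The sketch below is purely illustrative and not Read.ai's method; the event schema and the `engagement_features` function are hypothetical assumptions.

```python
def engagement_features(events):
    """Derive simple 'reaction layer' signals from meeting metadata.

    events: list of dicts like
        {"speaker": "A", "start": 0.0, "end": 4.0, "words": 8}
    Returns per-speaker talk time, word count, speaking speed,
    and a rough interruption count.
    """
    stats = {}
    prev = None
    for e in sorted(events, key=lambda e: e["start"]):
        s = stats.setdefault(
            e["speaker"], {"talk_time": 0.0, "words": 0, "interruptions": 0}
        )
        s["talk_time"] += e["end"] - e["start"]
        s["words"] += e["words"]
        # Count an interruption when a turn starts before the previous
        # speaker's turn has ended
        if prev and e["start"] < prev["end"] and e["speaker"] != prev["speaker"]:
            s["interruptions"] += 1
        prev = e
    for s in stats.values():
        s["words_per_sec"] = s["words"] / s["talk_time"] if s["talk_time"] else 0.0
    return stats
```

    Even features this crude capture information a plain transcript misses, which is the point of looking beyond text.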

    Hybrid Intelligence: Marrying Traditional and Modern AI:

    Read.ai’s architecture is not solely reliant on LLMs. In fact, Shim reveals that “90% of our processing was our own proprietary models” last month. They strategically use LLMs for the “last mile” – for generating readable sentences and paragraphs – after their proprietary NLP and computer vision models have already done the heavy lifting of topic identification, sentiment analysis, and metadata extraction.

    This hybrid approach is a powerful strategy for product managers. It emphasizes the importance of building core intellectual property rather than solely relying on wrapping existing foundation models. While LLMs are powerful tools, defensibility often lies in unique data processing, specialized models for specific tasks, and innovative feature combinations.
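    The "proprietary models for analysis, LLM for the last mile" split can be sketched as a simple pipeline. This is a hedged illustration of the pattern, not Read.ai's architecture; `classify_topics`, `score_sentiment`, and `llm_complete` are hypothetical stand-ins for in-house models and a hosted LLM call.

```python
def summarize_meeting(transcript, classify_topics, score_sentiment, llm_complete):
    """Hybrid pipeline: specialized models do the analysis; the LLM only writes prose."""
    # Heavy lifting with specialized (cheaper, more controllable) models
    topics = classify_topics(transcript)      # e.g. ["pricing", "roadmap"]
    sentiment = score_sentiment(transcript)   # e.g. {"pricing": 0.8, ...}

    # "Last mile": hand structured findings to the LLM for readable text
    findings = [(t, sentiment.get(t)) for t in topics]
    prompt = (
        "Write a short meeting summary covering these topics and how the "
        f"room reacted to each: {findings}"
    )
    return llm_complete(prompt)
```

    The defensibility argument in the text maps directly onto this split: the value sits in the structured `findings`, while the LLM call is a replaceable final step.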

    Product-Led Growth and Horizontal Market Vision:

    Read.ai’s explosive growth, adding “25,000 to 30,000 net new users every single day without spending a dollar on media,” is a testament to the power of product-led growth (PLG). This PLG engine is fueled by the inherently multiplayer nature of meetings. When one person uses Read.ai in a meeting, everyone present experiences its value, organically driving adoption.

    Furthermore, Read.ai consciously chose a horizontal market approach, resisting the pressure to niche down initially. Shim’s belief that “this is more mainstream… from an engineer at Google leads it to a teacher to an auto mechanic” proved prescient. Their user base spans diverse industries and geographies, highlighting the broad applicability of their co-pilot vision.

    For product managers, this demonstrates the power of designing for virality and considering broad market appeal, especially when building truly transformative products. Sometimes, focusing too narrowly early on can limit your potential impact.

    The Co-pilot Everywhere Vision and the Future of Optimization:

    Read.ai’s evolution from meeting notes to a “co-pilot everywhere” reflects a profound shift in AI’s role in productivity. It’s not just about generating content; it’s about optimization, action, and seamless integration into existing workflows. Shim envisions a future where Read.ai “pushes” insights to tools like Jira, Confluence, Notion, and Salesforce, and also “pulls” data from various sources to provide a unified, intelligent work assistant.

    This vision aligns with the emerging trend of AI agents. However, Shim emphasizes that the real power lies in practical integrations and seamless data flow between different work platforms, rather than just standalone agents. “You want your JIRA to talk with your Notion, to talk with your Microsoft, to talk with your Google, and talk with your Zoom,” he explains.

    For product managers, this means thinking beyond single-feature AI products. The next wave of innovation will be in building interconnected, optimized AI systems that proactively assist users across their entire workflow. It’s about moving from “draft AI” – generating content – to “optimization AI” – driving action and improving outcomes.

    Key Takeaways for Product Managers Building Next-Gen AI Products:

    • Focus on Real Problems: Start with genuine user pain points, not just technological possibilities.
    • Explore Unconventional Data: Look beyond text for richer, more nuanced insights.
    • Embrace Hybrid AI Architectures: Combine proprietary models with LLMs for defensibility and specialization.
    • Design for Product-Led Growth: Leverage inherent network effects and broad market appeal.
    • Vision Beyond Content Generation: Aim for optimization, action, and seamless integration into workflows.
    • Prioritize Value over Hype: Build solutions that deliver tangible ROI and improve user lives.
    • Iterate and Adapt: Constantly learn from user feedback and market dynamics to evolve your product.

    David Shim and Read.ai’s journey offer a compelling blueprint for product managers aiming to build the next generation of AI products. By focusing on genuine user needs, leveraging unconventional data, and envisioning a future of optimized, interconnected AI, product leaders can unlock the true potential of AI to transform the way we work and live.



  • Modern Data Stack: Practical Strategies for Enterprise Product Management Leaders


    In the rapidly evolving landscape of cloud technology, it’s crucial for product management professionals to stay ahead of the curve, understanding not just the latest trends but also the foundational principles that shape the future of AI-driven products. In a recent episode of the podcast, Nataraj, host of the show, engaged in a fascinating conversation with Molham Aref, CEO of Relational AI, offering a treasure trove of insights for product leaders navigating the complexities of the modern data stack and enterprise AI.

    Molham, a seasoned veteran with over 30 years of experience in machine learning and AI, shared his journey from early computer vision projects at AT&T to leading Relational AI, a company revolutionizing how enterprises build intelligent applications. His career path, marked by stints at HNC Software (pioneers in neural networks), Brickstream (early computer vision in retail), PredictX, and Logicbox, provides a rich tapestry of lessons for product managers aiming to build impactful and scalable solutions.

    The Evolution of Enterprise AI and Product Management’s Role

    Molham’s journey underscores a critical evolution in enterprise AI. He began in an era where neural networks were nascent, focusing on specific problem domains like credit card fraud detection. “I started out working on computer vision problems at AT&T as a young engineer… and then I joined a company that was commercializing neural networks,” he recounted. This early phase highlighted the power of specialized AI models but also the challenges of broad applicability and integration within complex enterprise systems.

    For product managers, this historical context is vital. It reminds us that technological advancements are often iterative, building upon previous paradigms. Just as neural networks evolved, so too is the current wave of Gen AI. Understanding these historical cycles allows product teams to better anticipate future trends and avoid being swept away by hype cycles.

    A key product lesson Molham shared is the importance of speaking the customer’s language. Reflecting on his time at HNC, he noted, “When HNC started, they were just selling neural networks. And you go to a bank and say, buy my neural networks. And the bank goes, what’s a neural network and why would I buy it? And at some point, they realized, hey, that’s not really effective. Let’s go to a bank and tell them we’re solving a problem they have in their language.” This emphasizes a fundamental product principle: value proposition trumps feature fascination. Product managers must articulate how their AI solutions directly address business problems, focusing on tangible outcomes like cost reduction, revenue generation, or risk mitigation.

    Decoding the Modern Data Stack and Relational AI’s Solution

    Molham’s career narrative culminates in Relational AI, born from the frustration of building intelligent applications with fragmented technology stacks. “My whole career was spent working at companies focused on building one or two intelligent applications and in every situation it was a mess,” he confessed. He highlighted the pain of “gluing it all together” – the operational stack, BI stack, predictive, and prescriptive analytics – each with its own data management, programming model, and limitations.

    This pain point is highly relatable for product managers in the data-driven era. The “modern data stack,” as Molham explains, emerged as an “unbundling of data management.” While offering flexibility, it also introduces complexity. Relational AI addresses this head-on by offering a “co-processor” for data clouds like Snowflake, creating a “relational knowledge graph” that unifies graph analytics, rule-based reasoning, and predictive/prescriptive analytics.

    For product managers, Relational AI’s approach offers a valuable blueprint: focus on simplifying complexity. In a world of proliferating tools and technologies, solutions that streamline workflows and reduce integration headaches are immensely valuable. Molham’s platform choice – building on Snowflake – is also instructive. “For SQL, for data management, Snowflake is by far the leader,” he stated, emphasizing the importance of platform decisions in product strategy and go-to-market. Product managers must carefully consider platform ecosystems, choosing those that offer broad adoption and strong market traction.

    Gen AI in the Enterprise: Beyond the Hype and Towards Practical Application

    The conversation naturally shifted to Gen AI, the current buzzword in the AI space. Molham acknowledged its excitement but injected a dose of realism. “Gen AI is super exciting. For the first time, we have models that can be trained in general, and then you have general applicability.” However, he cautioned against over-optimism in enterprise contexts. “In the enterprise, what people are finding out is having a model trained about the world doesn’t mean that it knows about your business.”

    This is a crucial insight for product managers exploring Gen AI applications. While Gen AI offers powerful capabilities, it’s not a silver bullet. Molham advocates for combining Gen AI with “more traditional AI technology, ontology, symbolic definitions of an enterprise, where you can talk about the core concepts of an enterprise.” This hybrid approach, leveraging knowledge graphs and structured data, is essential for building truly intelligent and context-aware enterprise applications.

    Product managers should heed this advice: Gen AI is a tool, not a strategy. Effective product strategies will involve thoughtfully integrating Gen AI with existing AI techniques and enterprise knowledge to deliver meaningful business value. Focus on use cases where Gen AI can augment, not replace, existing capabilities.

    B2B Sales and Founder Engagement: Lessons from the Trenches

    Molham shared invaluable insights on B2B sales, particularly for early-stage companies. He strongly believes in founder-led sales. “I really think it’s a mistake for the founders of the company not to take direct responsibility for sales,” he asserted. “You really have to go out there and do the really hard work of customer engagement and embarrassing yourself and doing all of those things to see what really works, what really resonates and where the pain is.”

    For product managers, especially in B2B tech, this underscores the importance of direct customer engagement. Product roadmaps should be informed by firsthand customer feedback, not just market research or analyst reports. Founder-led sales, as Molham suggests, provides invaluable raw data and customer intimacy that shapes product direction and market positioning.

    He also debunked the stereotype of the “slick talker” salesperson, emphasizing the value of “content rich folks who are also able to study and learn the problems of the prospect… teaching and tailoring.” This resonates deeply with product management – successful B2B sales, like successful product management, is about understanding and solving customer problems with expertise and tailored solutions.

    Mentorship, Hard Truths, and the Human Element

    Molham concluded with reflections on mentorship and the challenges of being a founder/CEO. He highlighted the immense value of mentors like Cam Lanier and Bob Muglia, emphasizing their integrity, long-term thinking, and win-win approach. He also candidly shared the difficulty of the founder journey. “It’s hard. It’s very difficult. This will probably be the last time I do this,” he joked, before quickly adding his passion for the mission and the quality of his team keeps him going.

    For product managers, these reflections are a reminder of the human element in building products and companies. Mentorship is crucial for navigating career challenges and gaining wisdom from experienced leaders. And the journey of product development, like entrepreneurship, is inherently challenging, requiring resilience, passion, and a strong team.

    Key Takeaways for Product Managers

    Molham Aref’s insights offer a powerful framework for product managers in the AI era:

    • Understand the historical context of AI: Technological evolution is iterative. Learn from the past to anticipate the future.

    • Focus on customer value proposition: Speak the customer’s language and solve real business problems.

    • Simplify complexity in the data stack: Prioritize solutions that streamline workflows and reduce integration burdens.

    • Gen AI is a tool, not a strategy: Integrate Gen AI thoughtfully with traditional AI and enterprise knowledge.

    • Engage directly with customers: Founder-led sales and direct customer feedback are invaluable for product direction.

    • Embrace mentorship and the human element: Learn from experienced leaders and build resilient, passionate teams.

    By internalizing these lessons, product management professionals can navigate the complexities of the modern data stack, harness the power of AI, and build truly impactful products for the enterprise of tomorrow.



  • Why Data and Compute Are the Real Drivers of AI Breakthroughs


    Artificial intelligence has captivated industries and imaginations alike, promising to reshape how we work, learn, and interact with technology. From self-driving cars to sophisticated language models, the advancements seem almost boundless. But beneath the surface of architectural innovations like Transformers, a more fundamental shift is driving this progress: the power of scale, fueled by vast datasets and immense computing resources.

    This insight comes from someone who has been at the forefront of this revolution. Jiquan Ngiam, a veteran of Google Brain, an early leader at Coursera, and now founder of the AI agent company Lutra AI, offers a grounded perspective on the forces truly propelling AI forward. In a recent interview on the Startup Project podcast, he shared invaluable lessons gleaned from years of experience in the trenches of AI development. His key takeaway? While architectural ingenuity is crucial, it’s the often-underestimated elements of data and compute that are now the primary levers of progress.

    The “AlexNet Moment”: A Lesson in Scale

    To understand this perspective, it’s crucial to revisit a pivotal moment in deep learning history: AlexNet in 2012. As Jiquan explains, AlexNet wasn’t a radical architectural departure. Convolutional Neural Networks (CNNs), the foundation of AlexNet, had been around for decades. The breakthrough wasn’t a novel algorithm, but rather a bold scaling up of existing concepts.

    “AlexNet took convolutional neural networks… and they just scaled it up,” Jiquan recounts. “They made the filters bigger, added more layers, used a lot more data, trained it for longer, and just made it bigger.” This brute-force approach, coupled with innovations in utilizing GPUs for parallel processing, shattered previous performance benchmarks in image classification. This “AlexNet moment” underscored a crucial lesson: sometimes, raw scale trumps algorithmic complexity.

    This principle has echoed through subsequent AI advancements. Whether in image recognition or natural language processing, the pattern repeats. Architectures like ResNets and Transformers provided improvements, but their true power was unleashed when combined with exponentially larger datasets and ever-increasing computational power. The evolution of language models, from early Recurrent Neural Networks to the Transformer-based giants of today, vividly illustrates this point. The leap from GPT-2 to GPT-3 and beyond wasn’t solely about algorithmic tweaks; it was about orders of magnitude increases in model size, training data, and compute.

    The Data Bottleneck and the Future of AI

    However, this emphasis on scale also reveals a looming challenge: data scarcity. Jiquan raises a critical question about the sustainability of this exponential growth. “To scale it up more, you need not just more compute, you also need more data, and data is one that I think is going to be limiting us,” he cautions. The readily available datasets for language models, while vast, are finite and potentially becoming exhausted. Generating synthetic data offers a potential workaround, but its effectiveness remains limited by the quality of the models creating it.

    This data bottleneck is particularly acute in emerging AI applications like robotics. Consider the quest for general-purpose robots capable of performing everyday tasks. As Jiquan points out, “there is no data of me folding clothes… continuously of different types, of different kinds, in different households.” Replicating human dexterity and adaptability in robots requires massive amounts of real-world, task-specific data, which is currently lacking.

    This data challenge suggests a potential shift in AI development. While scaling up models will continue to be important, future breakthroughs may hinge on more efficient data utilization, innovative data generation techniques, and perhaps a renewed focus on algorithmic efficiency. Jiquan hints at this, noting, “incremental quality improvements are going to be harder moving forward… we might be at that curve where… the next incremental progress is harder and harder.”

    Agentic AI: Extending Intelligence Beyond Code

    Despite these challenges, Jiquan remains optimistic about the transformative potential of AI, particularly in the realm of “agentic AI.” His company, Lutra AI, is focused on building AI agents that can assist knowledge workers in their daily tasks, from research and data analysis to report generation and communication.

    The vision is to create AI that is “natively integrated into the apps you use,” capable of understanding context, manipulating data within those applications, and automating complex workflows. This goes beyond code generation, aiming to empower users to delegate a wide range of knowledge-based tasks to intelligent assistants.

    Navigating the Hype and Reality

    As AI continues its rapid evolution, it’s crucial to maintain a balanced perspective, separating hype from reality. Jiquan offers a pragmatic view on the ongoing debate about Artificial General Intelligence (AGI). He suggests shifting the focus from abstract definitions of AGI to the more tangible question of “what set of tasks… can we start to delegate to the computer now?”

    This practical approach emphasizes the immediate, real-world impact of AI. Whether it’s enhancing productivity through AI-powered coding tools like Cursor, or streamlining workflows with agentic AI assistants like Lutra AI, the benefits are already materializing. The future of AI, therefore, may be less about achieving a singular, human-level intelligence and more about continually expanding the scope of tasks that AI can effectively augment and automate, driven by the ongoing forces of data, compute, and human ingenuity. As we move forward, understanding and strategically leveraging these fundamental drivers will be key to unlocking AI’s full potential.


    Nataraj is a Senior Product Manager at Microsoft Azure and the Author at Startup Project, featuring insights about building the next generation of enterprise technology products & businesses.


    Listen to the latest insights from leaders building the next generation products on Spotify, Apple, Substack and YouTube.

  • EB1A Visa Hacks with ChatGPT: Can AI Help You Win?

    The rise of AI has sparked interest in its potential applications across various fields, including immigration law. ChatGPT, a powerful language model, is generating excitement among tech professionals seeking the coveted EB1A visa. While this technology can be a valuable tool, it’s essential to understand its limitations and use it strategically to maximize its potential.

    ChatGPT’s Advantages: Streamlining the Letter Writing Process

    One of the most significant challenges for EB1A applicants is crafting strong, persuasive letters of support from experts in their field. ChatGPT excels at generating text, offering a unique advantage for individuals struggling with writer’s block. By providing specific prompts, you can guide the AI to generate initial drafts of letters that highlight your achievements and contributions. This can be a significant time saver, allowing you to focus on refining and enhancing the content.

    Avoiding Common EB1A Mistakes: ChatGPT and the Art of Specificity

    However, it’s crucial to remember that ChatGPT is a tool, not a substitute for expert legal advice. While it can assist in generating text, it lacks the nuanced understanding of immigration law and case-specific complexities. The AI may inadvertently introduce errors or produce content that isn’t persuasive or legally compliant.

    For instance, it might use terms like “exceptional ability” instead of “extraordinary ability,” which can be detrimental to your case. This highlights the need for a seasoned immigration attorney who can review the AI-generated content, address potential issues, and ensure it aligns with the specific requirements of your case.

    Beyond Text Generation: A Holistic Approach to EB1A

    Ultimately, winning an EB1A petition involves building a compelling narrative showcasing your extraordinary ability and contributions. ChatGPT can be helpful in generating text, but it cannot replace the need for a comprehensive strategy. This includes gathering and organizing evidence, crafting compelling arguments, and understanding the specific requirements for your field.

    Remember, a successful EB1A case goes beyond simply producing letters. You need to demonstrate a sustained record of accomplishments that are recognized by your peers and industry leaders. This involves showcasing your research, publications, awards, and other evidence of your impact.

    The Role of an Immigration Attorney: A Guiding Hand

    While ChatGPT can be a valuable tool, it’s essential to have a skilled and experienced immigration attorney on your side. They provide the legal expertise, strategic guidance, and personalized support you need to navigate the complex EB1A process. They can review your AI-generated materials, offer feedback, and ensure your case is presented effectively to USCIS.

    Key Takeaways

    • ChatGPT can be a valuable tool for generating letters of support and refining content for your EB1A petition.
    • It’s crucial to understand the AI’s limitations and not rely on it solely for legal guidance.
    • An experienced immigration attorney is essential to review your materials, ensure accuracy and persuasiveness, and build a comprehensive case strategy.

    TL;DR

    ChatGPT can be a helpful tool for generating text, but it’s not a magic bullet for EB1A success. Consult with an experienced attorney for expert guidance and strategic support.


  • Want to Be a VC? Here’s What You Need to Know (Beyond the Hype)

    Venture capital (VC) is often portrayed as the glamorous side of finance, attracting ambitious individuals seeking high-stakes rewards. But what’s the reality behind the hype? Today, we’ll dive into the world of venture capital, exploring its core functions, the daily grind of an investor, compensation, and whether it’s the right fit for you.

    What do VC firms do?

    VC investors act as partners to entrepreneurs, helping them build and scale businesses from the ground up. They provide three key elements:

    • Funding: VC firms inject massive amounts of capital, enabling companies to hire talent and expand their operations.
    • Insight: Experienced VC investors, who have seen countless companies rise and fall, offer invaluable advice and guidance to entrepreneurs, particularly during challenging times.
    • Network: Top VC firms boast extensive networks, opening doors for entrepreneurs in areas like recruiting top talent or securing crucial business partnerships.

    The Venture Capital Investment Landscape

    VC firms specialize in various stages of investment:

    • Pre-Seed/Seed: The earliest round of funding, where risk is high, and equity stakes are substantial.
    • Early Stage: Includes seed, Series A, and Series B rounds, with moderate risk and equity stakes.
    • Growth Stage: Consists of Series C to IPO, with lower risk and smaller equity stakes.
    • Stage Agnostic: These firms invest across all stages, from seed to IPO.

    The Day-to-Day Life of a VC Investor

    No two days are the same if you are working in venture capital.

    Key responsibilities include:

    • Sourcing new investments: This involves extensive outreach, attending conferences, and hosting events to attract promising founders and companies.
    • Conducting due diligence: This includes analyzing pitch decks, evaluating valuations, conducting market research, and crafting investment memos.
    • Supporting portfolio companies: This entails joining boards, connecting companies with potential customers and suppliers, and assisting with recruitment.
    • Fundraising: As you climb the ranks, you become responsible for managing investors, providing regular portfolio updates, and securing funding for new ventures.

    The kind of day you will have in VC depends on whether you are a junior investor with no stake in the fund or the General Partner of the fund.

    The Hustle and the High Stakes

    VC investors typically work long hours, often 60-80 hours per week, with the potential for even longer hours during fundraising rounds. The work is highly self-directed, requiring initiative, resourcefulness, and creative problem-solving.

    While the hours can be grueling, the high stakes and potential for significant returns create a high-pressure environment. Every investment carries the risk of going to zero, making each hour a critical decision-making process.

    The VC Compensation Structure

    Compensation typically comprises three components: salary, bonus, and carry. Carry represents a percentage of the fund’s annual profits, varying greatly between firms and years.

    • Analysts: Starting salaries range from $60,000 to $100,000, with little to no carry.
    • Associates: Earn $150,000 to $200,000, often with minimal carry.
    • Senior Associates: Earn $200,000 to $250,000 and begin receiving substantial carry.
    • VPs and Partners: Carry becomes a significant portion of compensation, motivating investors to make profitable investments.
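Carry is easiest to grasp with arithmetic. The sketch below assumes a conventional 20 percent carry rate with no management fees or hurdle rate; actual terms vary greatly between firms, as the article notes:

```python
def carried_interest(fund_size, gross_return_multiple, carry_pct=0.20):
    """Profit share ('carry') earned on a fund's gains.

    carry_pct of 20% is a common industry convention, assumed here;
    fees and hurdle rates are ignored for simplicity.
    """
    distributions = fund_size * gross_return_multiple
    profit = max(distributions - fund_size, 0)
    return profit * carry_pct

# A $100M fund returning 3x generates $200M of profit,
# so the partnership's 20% carry pool is $40M.
pool = carried_interest(100_000_000, 3.0)

# A partner holding 5% of the carry pool would receive $2M.
partner_share = pool * 0.05
print(pool, partner_share)
```

This is why carry dwarfs salary at the partner level: a single successful fund can pay out multiples of a decade of base compensation.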

    Is Venture Capital Right for You?

    Pros:

    • Flexibility and autonomy: You’ll have the freedom to work on projects that align with your interests and expertise.
    • Exposure to cutting-edge technologies: You’ll be at the forefront of innovation, engaging with brilliant entrepreneurs and industry leaders.
    • Relationship-driven environment: Venture capital thrives on building connections, allowing both introverts and extroverts to succeed.

    Cons:

    • Unstructured work environment: If you thrive on a predictable schedule, venture capital may not be the best fit.
    • Long-term commitment: You may have to wait 7-10 years to see returns on your investments.
    • Intense competition: The VC space is highly competitive, requiring a strong reputation or a position at a top firm to stand out.
    • Lack of expertise: Junior analysts often join VC firms without relevant operating experience, and the breadth of the job can keep them from developing deep expertise in any single field.

    Breaking into the VC Industry

    While a background in investment banking or consulting is often considered an ideal stepping stone, many successful VC investors come from diverse backgrounds in technology, business, and other relevant fields. Many VCs are ex-founders who went through the journey of running a VC-backed company.

    Ready to Dive Deeper?

    For a more in-depth look at the venture capital world, check out our podcast “Startup Project Podcast” where we interview founders, VCs and operators.

    Subscribe to Startup Project YouTube channel for great conversations.
  • How Mighty Capital Defies the Odds of Technology Investing by Being Product-First

    How Mighty Capital Defies the Odds of Technology Investing by Being Product-First

    In a world where venture capital success is often described as a game of chance, with a hit rate of one in 20 or even one in 30, Mighty Capital stands out. Founded by entrepreneur, product leader, & author SC Moatti, Mighty Capital has carved a unique path in the industry, focusing on a “product-first” approach and achieving a remarkable hit rate of one in five.

    On a recent episode of the Startup Project podcast, SC Moatti shared insights into her journey with host Nataraj Sindam, revealing the secrets behind her unconventional success.

    From Product Guru to VC Pioneer

    SC Moatti has a diverse background, ranging from a successful career in product management at companies like Meta and Nokia to founding her own companies and angel investing. Her passion for product excellence led her to establish “Products That Count,” a non-profit organization dedicated to fostering knowledge and best practices within the product management community. This platform has served as a valuable resource for Mighty Capital, providing valuable insights into emerging trends and identifying potential investments.

    The Product Mindset in Venture Capital

    Mighty Capital distinguishes itself by applying a product mindset to venture capital. This means looking beyond traditional metrics and focusing on the core elements of a successful product:

    • Team: They meticulously evaluate the team’s performance, board composition, and the CEO’s ability to be coached.
    • Traction: They seek companies with demonstrable revenue growth, analyzing revenue composition and customer references.
    • Market: They analyze the market, roadmap, and the company’s potential for growth.
    • Terms: They prioritize fair terms that foster a long-term partnership with entrepreneurs.

    This approach, combined with her deep understanding of the product landscape, and the unique network of Products That Count, has enabled Mighty Capital to invest in companies like Amplitude, Grok, Airbnb, and Digital Ocean, demonstrating a knack for identifying winners before they become mainstream.

    Beyond the Numbers: Building a Better Board

    SC Moatti also highlights the importance of board governance in early-stage companies. She teaches a course on the subject at Stanford’s Executive Program, emphasizing the critical role of board members in maximizing shareholder value through effective use of financial and human resources. She believes that effective board engagement transcends the traditional power dynamics, focusing instead on collaborative partnerships with founders.

    The Future of Product Management and AI

    SC Moatti believes that product management is a constantly evolving field, and emphasizes the need for ongoing learning and adaptation. She encourages aspiring product managers to engage with the product community through platforms like “Products That Count,” to keep up with the latest trends and challenges.

    When it comes to the future of AI, SC Moatti cautions against focusing solely on small, quick-win problems. She advocates for tackling larger, more complex issues, such as drug discovery, self-driving cars, and loneliness, areas where AI has the potential to revolutionize industries and improve lives.

    Key Takeaways for Startups

    SC Moatti’s insights offer valuable lessons for aspiring entrepreneurs:

    • Think big, start small: Focus on solving big problems but take a smaller, incremental approach to execution.
    • Invest in product excellence: Prioritize product quality and user experience as foundational elements of success.
    • Embrace lifelong learning: Continuously expand your knowledge and skills in the ever-evolving tech landscape.
    • Seek out mentors: Connect with peers and industry leaders who can offer guidance and support.

    Mighty Capital’s success serves as a testament to the power of applying a product mindset to the world of venture capital. By prioritizing a product-first approach and building strong relationships with entrepreneurs, they are defying the odds and shaping a new era of VC innovation.


  • How Scispot is redefining modern biotech’s data infrastructure

    How Scispot is redefining modern biotech’s data infrastructure

    Biotech is becoming one of the world’s biggest generators of data, expected to reach 40 exabytes a year by 2025—outstripping even astronomy’s fabled data deluge. Yet as much as 80 percent of those bytes never make it into an analytics pipeline. Three bottlenecks explain the gap: (1) stubbornly paper-based processes, (2) binary or proprietary instrument file formats that general-purpose integration tools cannot parse, and (3) hand-offs between wet-lab scientists and dry-lab bioinformaticians that break data lineage.

    Verticalization 2.0: Solving for Domain-Specific Friction

    Enter Scispot, a Seattle-based start-up founded in 2021 by brothers Satya and Guru Singh, which positions itself not as an electronic lab notebook or a data warehouse, but as a middleware layer purpose-built for life-science R&D, quality and manufacturing. The strategic insight is subtle and powerful: horizontal cloud platforms already exist, but they optimize for structured, JSON-ready data. Biotech’s heterogeneity demands schema-on-read ingestion and ontology mapping that an AWS or Snowflake cannot supply out of the box.
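As a hedged illustration (not Scispot's actual code, and with invented field names), schema-on-read means storing raw instrument output verbatim and imposing structure only at query time, with an ontology mapping vendor-specific fields onto canonical terms:

```python
import json

# Raw instrument output is stored untouched; structure is imposed only
# when the data is read ("schema-on-read"), so a new vendor format
# never blocks ingestion.
raw_lake = [
    '{"instrument": "plate_reader", "od600": 0.42, "well": "A1"}',
    '{"device": "qpcr", "ct": 21.7, "sample": "S-103"}',
]

# A lightweight ontology maps vendor-specific field names to canonical terms.
ONTOLOGY = {"od600": "absorbance", "ct": "cycle_threshold",
            "instrument": "source", "device": "source"}

def read_with_schema(raw_record):
    """Apply the ontology at read time instead of at ingestion time."""
    record = json.loads(raw_record)
    return {ONTOLOGY.get(key, key): value for key, value in record.items()}

normalized = [read_with_schema(r) for r in raw_lake]
print(normalized)
```

The ontology table, not the storage layer, is where the domain expertise lives—which is precisely the layer a horizontal warehouse does not ship with.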

    Scispot’s architecture borrows liberally from modern data stacks—an unstructured “lake-house” for raw instrument output, metadata extraction via embeddings, and API access to graph databases—but is wrapped in compliance scaffolding (SOC 2, HIPAA, FDA 21 CFR 11) that is prohibitively expensive for labs to build alone. The company is effectively productizing the cost of trust, a move that mirrors how Zipline built FDA-grade logistics in drones or how Databricks turned Apache Spark into audit-ready enterprise software.

    YC’s Real Dividend: Market Signal Discipline

    Although accepted to Y Combinator on the promise of a voice-activated lab assistant, Scispot pivoted within weeks when early interviews revealed that customers valued reliable data plumbing over conversational bells and whistles. This underscores a counter-intuitive lesson from YC alumni: the program’s most enduring value may not be its brand or cheque, but its insistence that founders divorce themselves from their first idea and marry themselves to user-observed pain.

    That discipline paid off. Scispot signed its first customer before writing a line of production code—a pattern consistent with what Harvard Business School’s Thomas Eisenmann calls “lean startup inside a vertical wedge.” By focusing on a tiny subset of users (labs already running AI-driven experiments) but solving 90 percent of their total workflow, the brothers accelerated to profitability in year one and maintained “default alive” status, insulating the firm from the 2024 venture slowdown.

    Why Profitability Matters More in Vertical SaaS

    Horizontal SaaS vendors can afford years of cash-burn while they chase winner-take-all network effects; vertical players rarely enjoy those economies of scale. Instead, their defensibility comes from domain expertise, proprietary integrations and regulatory moats. Profitability becomes a strategic asset: it signals staying power to conservative customers, funds the painstaking addition of each new instrument driver, and reduces dependence on boom-and-bust capital cycles.

    Scispot’s break-even footing has already shaped its product roadmap. Rather than racing to become an all-in-one “Microsoft for Bio” suite, the team is doubling down on an agent-based orchestration engine that lets instrument-specific agents talk to experiment-metadata agents under human supervision. The choice keeps R&D burn modest while reinforcing the middleware thesis: be everywhere, own little, connect all.

    Lessons for Operators and Investors

    1. Treat Unstructured Data as a Feature, Not a Bug. Companies that design for messiness—using vector search, ontologies and schema-on-read—capture value where horizontal rivals stall.
    2. Compliance Is a Product Line. SOC 2 and HIPAA are not check-box exercises; they are sources of price premium and switching cost when woven into the core architecture.
    3. Fundamentals Trump Funding. YC’s internal analysis, echoed by Scispot’s trajectory, shows no linear correlation between dollars raised and long-term success. Default-alive vertical SaaS firms can wait for strategic rather than survival capital.
    4. Remote Trust-Building Is a Competency. Scispot’s COVID-era cohort had to master virtual selling and onboarding. As biotech globalizes, that skill set scales better than another flight to Cambridge, MA.

    What Comes Next

    Scispot’s stated near-term goal is to become the staging warehouse for every experimental data point a lab produces, integrating seamlessly with incumbent ELNs and LIMS. Over a five-year horizon, the company aims to enable customers to mint their own AI-ready knowledge graphs—effectively turning drug-discovery IP into a queryable asset class. If successful, the platform could evolve into the “Databricks of Biotech,” but without owning the data outright.



  • What Does Klarna’s Case Study Tell Us About Building Products Using Generative AI?

    What Does Klarna’s Case Study Tell Us About Building Products Using Generative AI?

    Companies are putting generative AI to work in many ways. In Klarna’s case, the company launched an AI customer support assistant that is available 24/7 and supports 30+ languages, built with OpenAI as its LLM provider. Our guess is that Klarna fine-tuned it on the customer support interaction data it has collected over the years. “A customer support AI assistant” sounds modest, but consider what it takes to staff a human support team that works 24/7 in 30+ languages: it demands enormous resources from any company.

    What features does the Klarna AI assistant support?

    • The assistant handles a wide range of questions related to refunds, returns, payment issues, cancellations, disputes, and invoice inaccuracies, ensuring swift and effective resolutions.
    • It is available 24/7 across all geographies.
    • For customers who want to understand what they can afford, it provides information on purchasing power and spending limits, along with the reasons behind them.
    • The assistant has access to the customer’s account information and can answer any questions related to it.

    Is the assistant effective?

    It’s easy to announce a gen AI feature and watch the stock pop; the real question is whether the feature delivers value to the company. From the numbers, the assistant looks to be beating expectations:

    • Two-thirds of all customer service chats are handled by the assistant—an impressive 2.3M conversations.
    • Customer satisfaction is on par with conversations handled by human agents.
    • Repeat inquiries have dropped by 25%.
    • The assistant is estimated to drive $40M in profit improvement for the company.

    Multilingual coverage has also improved the experience for immigrant and expatriate shoppers, who can now converse in their native language without waiting for specialized staff.

    How is it built?

    Klarna combines OpenAI’s GPT-4-class model with a proprietary retrieval-augmented layer that injects fresh order data and policy documents. Fine-tuning on years of support transcripts helps the bot mirror Klarna’s brand voice and comply with local regulations. A confidence-scoring system automatically hands uncertain cases to human agents, keeping quality assurance intact.
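Klarna has not published its internals, so purely as an illustration, a confidence-threshold router like the one described might look like the sketch below (the threshold value and field names are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """A candidate reply produced by the LLM for a support ticket."""
    reply: str
    confidence: float  # model's self-reported score in [0, 1]

def route(draft, threshold=0.85):
    """Send high-confidence drafts to the customer; escalate the rest.

    The 0.85 threshold is an illustrative assumption; in practice it
    would be tuned against audited accuracy on held-out tickets.
    """
    if draft.confidence >= threshold:
        return ("bot", draft.reply)
    return ("human", draft.reply)  # an agent reviews before sending

print(route(Draft("Your refund has been issued.", 0.93)))
print(route(Draft("Unsure about this dispute policy.", 0.41)))
```

The design choice worth noting is that the fallback path preserves the draft: even escalated cases save the human agent time, since they edit rather than write from scratch.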

    Should you build one too?

    If your firm has a sizable archive of labeled support data, start by clustering tickets to find the top 20 intents—typically 80 percent of volume. Spin up a retrieval-augmented bot in a sandbox, keep humans on standby for low-confidence cases, and track three metrics: time-to-resolution, repeat-contact rate, and CSAT. Double-digit gains in two of those usually justify rollout.
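The triage step above can be sketched in miniature. In practice the intents would come from clustering ticket embeddings; this toy assumes tickets are already labeled with an intent and shows only the coverage cut that picks the intents worth automating first (intent names and counts are invented):

```python
from collections import Counter

def top_intents(ticket_intents, coverage=0.80):
    """Smallest set of intents covering `coverage` of ticket volume."""
    counts = Counter(ticket_intents)
    total = sum(counts.values())
    covered, chosen = 0, []
    for intent, n in counts.most_common():
        chosen.append(intent)
        covered += n
        if covered / total >= coverage:
            break
    return chosen

# Illustrative ticket volumes for a hypothetical support queue.
tickets = (["refund"] * 50 + ["where_is_my_order"] * 30
           + ["cancel"] * 15 + ["other"] * 5)

# The handful of intents worth automating first.
print(top_intents(tickets))
```

Here two of four intents already cover 80 percent of volume, which is the usual shape of support queues and why a narrow bot can still absorb most traffic.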

    A rollout this sweeping touches more than technology. Klarna ran “shadow agent” weeks where the AI answered in parallel with humans so supervisors could audit every response. Once accuracy held above 90 percent, traffic was gradually shifted to the bot. Daily dashboards track hallucination rate, hand-off frequency, and compliance flags that regulators can audit after the fact.

    What are the takeaways from Klarna’s gen AI assistant?

    1. It’s clear that generative AI is being adopted and creating value for companies at all stages.
    2. Customer support will get more effective and cheaper.
    3. Expect many growth-stage startups to follow Klarna in boosting profitability and cutting expenses.
    4. OpenAI’s board drama didn’t affect companies’ trust in using its models for production-scale deployments.
    5. Expect more companies to announce gen-AI-driven improvements in fields outside customer support.
    6. If your company holds a large customer support dataset, you should explore creating an AI assistant for support interactions.

    Follow me on Twitter and LinkedIn for the latest updates on 100 Days of AI. If you are in tech, you might be interested in joining my community of tech professionals here.