Skip to content

Five Generative AI Trends Product Teams Must Watch

Generative artificial intelligence is advancing so quickly that Nvidia—the maker of the H-series GPUs inside most training clusters—briefly overtook Amazon’s market cap this spring. New models, plug-ins, and AI-native start-ups appear every week, making it hard to separate hype from signal. The five forces below are the ones most likely to reshape product roadmaps over the next 12–18 months.

1. Data Turns Into Revenue

“Data is the new oil” finally has a dollar sign. In February, Reddit reportedly licensed 18 years of forum conversations to an unnamed AI company for USD 60 million per year. X (Twitter) and Stack Overflow are said to be exploring similar arrangements, and publishers such as The Financial Times have already inked deals with OpenAI. If your company owns a large, well-labeled corpus—support tickets, sensor logs, maintenance manuals—you may be sitting on a salable asset.
Beyond cash, licensing can fund better infrastructure and raise your brand’s profile inside the AI ecosystem, but it also creates legal and reputational risk around privacy and consent.
PM tip: Inventory proprietary datasets, rank them by sensitivity, and involve counsel early. Decide whether the bigger upside is outside revenue or internal fine-tuning that sharpens your own product.

2. The Rise of Open-Source Models

Closed weights from OpenAI still dominate mindshare, yet open alternatives are sprinting forward. Meta’s Llama 3, Mistral’s Mixtral-8x7B, and Databricks’ DBRX hit strong benchmarks with weights anyone can download and inspect. Freedom to run the model anywhere means you control latency, can fine-tune without sending data to a third party, and avoid per-token API fees.
Open licenses also spark a plug-and-play ecosystem: retrieval frameworks, vector databases, and guardrail libraries often ship adapters for Llama or Mixtral first.
PM tip: Budget a benchmarking sprint. Even if an open model trails GPT-4 by a few points, cost savings and privacy control can be decisive in regulated markets.

3. Small Language Models (SLMs) Move to the Edge

LLMs wowed the public, but their compute appetite makes them expensive and sometimes slow. Small Language Models—roughly 7 billion parameters or fewer—run on phones and even micro-servers. Microsoft’s Phi-3-mini handles homework locally; Google’s T5-small translates offline; Apple’s Ajax prototypes hint at on-device personal assistants.
Why it matters: on-device inference slashes latency to milliseconds, works without a network, and keeps sensitive data on hardware you control. Cloud bills drop because generation shifts from usage fees to a one-time silicon cost.
PM tip: Pinpoint user journeys where instant response, offline mode, or strict privacy are mandatory—field service, hospitals, commuter apps. An SLM may hit quality targets while cutting 80 percent of your cloud spend.

4. Securing the Prompt Supply Chain

The more people rely on generative UX, the more attackers probe it. Jailbreaks can elicit disallowed content, prompt injections hide malicious instructions in PDFs, and data-poisoning campaigns seed falsehoods for crawlers to ingest. Start-ups such as Lakera and PromptGuard now sell “LLM firewalls,” while NIST’s red-team playbook and the OWASP Top 10 for LLMs are becoming default checklists.
PM tip: Version prompts in Git, sandbox untrusted inputs, and schedule regular red-team drills. Treat generative systems like any other executable code exposed to the public internet.

5. Regulation and Investment Chess

Governments are shifting from hearings to rules. The EU’s AI Act introduces graded risk tiers and mandatory transparency; the U.S. Executive Order on Safe, Secure, and Trustworthy AI calls for watermarking and safety evaluations; China’s Generative AI measures require security reviews before public release. Regulators are also scrutinizing equity deals—Microsoft-OpenAI, Amazon/Google-Anthropic—that let incumbents lock in model access without outright acquisitions. Forced divestitures or tighter disclosure rules could arrive with little notice.
PM tip: Track the provenance of every model you ship, log user interactions for audits, and design a modular stack so a supplier swap won’t break the customer experience.

The Bottom Line

Data licensing, open models, edge inference, security hardening, and policy oversight are converging fast. Product managers who watch these trends now can choose the right model, negotiate favorable licenses, and build safeguards before regulators—or attackers—demand them. Staying informed keeps teams focused on shipping durable customer value instead of scrambling after the next flash-in-the-pan release.


Nataraj is a Senior Product Manager at Microsoft Azure and the Author at Startup Project, featuring insights about building the next generation of enterprise technology products & businesses.


Listen to the latest insights from leaders building the next generation products on Spotify, Apple, Substack and YouTube.