
Beyond Models: The Power of Context

  • Writer: VinVentures
  • Nov 2
  • 6 min read

Over the past year, the center of gravity in AI has steadily shifted. As a16z observed in “Context Is King” (2025), the next wave of defensibility in AI will be defined not by who builds the largest or fastest models, but by how intelligently those systems are applied within high-context human environments: the places where trust, workflow design, and domain understanding matter just as much as technical performance.


As foundation models become more accessible, the true opportunity now lies in how effectively companies translate capability into real-world value. Success depends not only on what an AI system can do, but on how naturally it fits into the rhythm of existing processes, how well it supports decision-making, safeguards data integrity, and augments human judgment. 


Today, we’ll explore why context has become the new moat, how it’s redefining what makes an AI company defensible, how it’s changing the profile of founders leading this generation of startups, and why those who build with context at the core will ultimately outlast those who only build with code.

 

1. The Founder Inversion 


In previous software waves, startups were usually founded by domain experts. A doctor built software to digitize patient workflows. A logistics manager turned operational pain points into SaaS for fleet tracking. Domain insight came first; engineering came later. 


AI has flipped that model. The new generation of founders starts from technical depth, not industry experience. They’re engineers, researchers, or data scientists who understand how to prompt, fine-tune, and orchestrate large language models. Their expertise lies in the toolset, not the domain. 


This inversion has unlocked speed. Technical founders can prototype in days and iterate in public. But speed introduces a new risk: when you start with technology instead of context, it’s easy to build something impressive that never quite fits the workflow. 

 

2. From Differentiation to Defensibility 


AI makes differentiation easy, and defensibility hard. With open-source models and public APIs, the cost of building has collapsed. But so has the cost of copying. A dozen teams can now ship nearly identical features within weeks. 


That’s why the most resilient companies are shifting their focus from speed to stickiness. They know that long-term advantage doesn’t come from better prompts or faster releases — it comes from contextual depth. 


Defensibility still rests on three timeless principles: 

  • Owning the workflow end-to-end. 

  • Embedding deeply into customer systems. 

  • Earning trust through accuracy and reliability. 


And in AI, each of those depends on context.  


Harvey: Embedding Legal Reasoning into AI Systems 


In 2022, Gabe Pereyra, a former DeepMind researcher, and Winston Weinberg, a litigation associate at O’Melveny & Myers, founded Harvey, an AI copilot for legal professionals. Pereyra brought technical mastery in reinforcement learning and reasoning systems; Weinberg contributed a practitioner’s sense of how lawyers argue, document, and defend their decisions. 


Harvey’s core system builds on large language models fine-tuned for legal reasoning. It layers those LLMs with structured retrieval pipelines that pull from precedent databases, templates, and firm-specific repositories, grounding every output in verifiable sources. Instead of generating text freely, the system constrains its reasoning through citation and document linking, ensuring interpretability, auditability, and compliance with strict confidentiality standards. 
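To make that pattern concrete, here is a minimal sketch of citation-constrained retrieval, the general technique the paragraph describes. Every name in it (SourceDoc, retrieve, grounded_answer, the llm callable) is an illustrative assumption rather than Harvey’s actual architecture: the model may only answer from retrieved sources, and any output that cites none of them is rejected.

```python
from dataclasses import dataclass
from typing import Callable, List

# Illustrative sketch only; names and structure are assumptions, not Harvey's design.

@dataclass
class SourceDoc:
    doc_id: str   # e.g. a precedent, template, or firm-repository identifier
    text: str

def retrieve(query: str, corpus: List[SourceDoc], k: int = 3) -> List[SourceDoc]:
    """Naive keyword scoring standing in for a production retrieval pipeline."""
    words = query.lower().split()
    scored = sorted(corpus, key=lambda d: -sum(w in d.text.lower() for w in words))
    return scored[:k]

def grounded_answer(query: str, corpus: List[SourceDoc],
                    llm: Callable[[str], str]) -> str:
    """Constrain generation to retrieved sources and require citations."""
    sources = retrieve(query, corpus)
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in sources)
    prompt = (
        "Answer using ONLY the sources below. Cite every claim as [doc_id]. "
        "If the sources are insufficient, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    answer = llm(prompt)
    # Reject any output that cites none of the documents we actually retrieved.
    if not any(f"[{d.doc_id}]" in answer for d in sources):
        return "No grounded answer: the retrieved sources do not support a cited response."
    return answer
```

The point of the sketch is that the citation check runs outside the model, so auditability does not depend on the model behaving well on its own.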


That design mirrors the discipline of the legal profession itself. Harvey doesn’t simply “write like a lawyer”; it reasons like one, weighing precision over speed and justification over novelty. It integrates directly into firms’ document management systems, aligning with internal processes and hierarchies of review. This fidelity to real-world legal practice, not just technical performance, helped Harvey win clients such as Allen & Overy and PwC Legal, and earn investment from OpenAI’s Startup Fund. Harvey’s defensibility lies in trust by design: its architecture encodes the same logic of evidence and accountability that governs the legal field. 

 

Runway: Turning Generative AI into Production Infrastructure 


At NYU’s Interactive Telecommunications Program, Cristóbal Valenzuela, Anastasis Germanidis, and Alejandro Matamala began experimenting with machine learning as a new medium for creativity. They weren’t filmmakers; they were engineers and artists asking a simple question: Could AI become a creative partner rather than just a tool? 


Their answer became Runway, launched in 2018 as an open playground for generative models in image and video creation. Early adoption came from digital artists, but professionals in film and design quickly exposed its limits: inconsistent frame quality, lack of version control, and weak integration with existing production software. 


Runway evolved fast. It rebuilt its core on a proprietary multimodal engine that combines diffusion models for text-to-video generation with temporal coherence systems to maintain frame-by-frame consistency. It integrated seamlessly with Adobe Premiere Pro, After Effects, and Unreal Engine, embedding AI capabilities directly inside professional workflows. 
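As a toy illustration of what frame-by-frame consistency means in practice, the sketch below blends each generated frame with the stabilized frame before it to suppress flicker. This is a generic post-processing trick under assumed array shapes, not Runway’s actual method.

```python
import numpy as np

def stabilize(frames: np.ndarray, alpha: float = 0.8) -> np.ndarray:
    """Blend each frame with the stabilized previous frame to reduce flicker.

    frames: (T, H, W, 3) float array of independently generated frames.
    """
    out = frames.copy()
    for t in range(1, len(out)):
        # keep `alpha` of the new frame, carry (1 - alpha) of the stabilized history
        out[t] = alpha * frames[t] + (1 - alpha) * out[t - 1]
    return out

# Usage: 16 random "frames" stand in for raw text-to-video output.
smoothed = stabilize(np.random.rand(16, 64, 64, 3))
```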


That shift, from model experimentation to production-grade infrastructure, redefined Runway’s competitive edge. Its architecture now optimizes for real-world constraints: color fidelity, export stability, and latency. The company’s innovation wasn’t merely algorithmic; it was operational. Runway bridged the gap between cutting-edge generative models and the exacting standards of commercial production, and in doing so, became part of the creative stack itself. 

 

Adept: Teaching Machines to Understand Human Workflows 


Founded in 2022 by David Luan and Niki Parmar, both alumni of OpenAI, Google Brain, and DeepMind, Adept set out to answer a different question: Can AI learn to use software the way people do? 


Rather than training domain-specific systems, Adept builds transformer-based agents that interact with existing applications such as Salesforce, Google Sheets, and Chrome through their actual interfaces. By combining text input with UI structure, cursor trajectories, and clickstream data, Adept’s models learn to perform tasks end-to-end, creating what the company calls a universal action model. 


These agents don’t predict text; they predict actions in context. They understand menus, shortcuts, and the logic behind user corrections. Over time, this forms a “workflow intelligence layer”, a behavioral dataset that maps how real people navigate digital work. 
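As a rough sketch of what predicting actions in context could look like, the code below defines a hypothetical observation/action schema and a predict-then-act loop; none of these names reflect Adept’s real data model or API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Observation:
    app: str          # which application or screen the agent is looking at
    ui_tree: Dict     # serialized UI structure: menus, fields, buttons
    instruction: str  # the user's natural-language goal

@dataclass
class Action:
    kind: str         # "click", "type", "scroll", "done", ...
    target: str       # UI element identifier
    value: str = ""   # text to type, if any

def run_episode(obs: Observation,
                predict: Callable[[Observation, List[Action]], Action],
                execute: Callable[[Action], Observation],
                max_steps: int = 20) -> List[Action]:
    """Predict-then-act loop: the model emits structured actions, not text."""
    history: List[Action] = []
    for _ in range(max_steps):
        action = predict(obs, history)  # next action conditioned on UI state + prior actions
        if action.kind == "done":
            break
        obs = execute(action)           # environment applies the action, returns new UI state
        history.append(action)          # this trace is the behavioral data described below
    return history
```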


Adept’s defensibility doesn’t come from model scale but from data exclusivity. While most foundation models are trained on static text, Adept’s systems learn from proprietary, high-resolution records of human task behavior, the kind of contextual data that cannot be scraped or replicated. 


In effect, Adept isn’t building software to replace humans; it’s training AI to use the tools humans already rely on. Its moat comes from this unique alignment between model learning and human intent, a form of context that compounds over time. 


3. "Context is the king"


Context Turns Capability Into Usefulness 


AI models are powerful at generating answers, but they’re still poor at understanding situations. A model can summarize a document or analyze data, but it doesn’t know the cultural, legal, or operational context in which those actions take place. 


In the real world, users don’t want creative output; they want reliable outcomes that respect industry standards, maintain data integrity, and follow decision-making rules. 

Context provides that missing bridge. It allows AI systems to operate within the “logic” of a specific domain — whether that means legal reasoning, financial compliance, manufacturing quality control, or clinical workflows. 


Without context, AI remains impressive but impractical. With context, it becomes a trusted assistant, a system that augments human judgment rather than complicating it. 

 

Context Builds Trust and Adoption 


Trust is the foundation of any sustainable AI product.  Users don’t trust AI because it’s intelligent; they trust it because it behaves consistently within their world, using familiar language, adhering to policy, and respecting boundaries. 


That’s why contextual grounding (relying on verified data sources, domain-specific logic, and workflow integration) is so powerful. When an AI behaves predictably and fits naturally into existing processes, users stop treating it as an experiment and start depending on it as infrastructure. 


This is exactly how companies like Harvey, Runway, and Adept have scaled. Their advantage isn’t just technical; it’s relational. They’ve earned permission to operate in high-stakes environments (law firms, production studios, and enterprises) where accuracy, continuity, and compliance are not optional. Trust, once earned, becomes the strongest form of retention. 

 

Context Creates a Data Flywheel 


Every time a user interacts with an AI system that’s deeply embedded in their workflow, it generates valuable behavioral data: how people phrase requests, make corrections, and handle exceptions. 


That feedback compounds over time. 

  • Better context produces better outputs. 

  • Better outputs drive higher usage. 

  • Higher usage creates richer data for fine-tuning. 


This contextual flywheel becomes a self-reinforcing loop, a proprietary data asset that no competitor can replicate. It’s the foundation of defensibility in the era of open models. 
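In practice, the flywheel can start with something as mundane as logging corrections. The sketch below uses a hypothetical schema and file name to show the shape of that asset: each time a user edits the system’s output inside their workflow, the pair becomes proprietary supervision data for later fine-tuning or evaluation.

```python
import json
from dataclasses import asdict, dataclass
from pathlib import Path

@dataclass
class CorrectionRecord:
    prompt: str         # what the user asked for, in their own phrasing
    model_output: str   # what the system produced
    user_edit: str      # what the user actually shipped after correcting it
    context_tags: list  # e.g. ["contract-review"], the domain signal competitors lack

def log_correction(record: CorrectionRecord, path: str = "flywheel.jsonl") -> None:
    """Append each correction as one JSONL line, ready for fine-tuning or evals."""
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Usage: every accepted edit becomes part of the proprietary dataset.
log_correction(CorrectionRecord(
    prompt="Summarize the indemnification clause.",
    model_output="The clause limits liability to direct damages.",
    user_edit="The clause caps liability at 12 months of fees and excludes indirect damages.",
    context_tags=["contract-review"],
))
```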

Companies that invest early in domain integration create moats that grow stronger with every user interaction. They may be using the same underlying LLMs as everyone else, but their data, trust, and workflow depth are uniquely their own. 

 

4. Implications for Startups 


This shift reshapes how AI companies should think about product strategy and long-term defensibility. 

  • Move from product demos to workflow depth. The goal isn’t to show impressive output; it’s to solve real operational pain points inside the customer’s system of record. 

  • Prioritize embedding over expansion. The most resilient startups dominate one vertical before expanding to others. Context travels horizontally only after it’s mastered vertically. 

  • Build data ownership through usage, not scraping. Proprietary value comes from how customers use your product, not from what you scrape from the internet. 

  • Treat trust as an asset. Every accurate, explainable, and compliant output compounds your credibility, and credibility compounds retention. 


For founders, this means pairing technical mastery with deep customer intimacy. For investors, it means evaluating startups not just on model innovation, but on their ability to embed AI into the hard edges of real business systems, where contracts are signed, decisions are made, and accountability lives. 



References:


Haber, D. (2025, August 18). Context is King. Andreessen Horowitz. https://a16z.com/context-is-king/


Martin, I. (2025, October 29). Legal AI startup Harvey raises $150 million at $8 billion valuation. Forbes. https://www.forbes.com/sites/iainmartin/2025/10/29/legal-ai-startup-harvey-raises-150-million-at-8-billion-valuation/


Vyshyvaniuk, K. (2025, August 13). The inspiring story: Cristóbal Valenzuela, CEO at Runway. KITRUM. https://kitrum.com/blog/the-inspiring-story-cristobal-valenzuela-ceo-at-runway/


Wiggers, K. (2022, April 26). Adept aims to build AI that can automate any software process. TechCrunch. https://techcrunch.com/2022/04/26/2304039/


 

 
 