
The Context Graph Is Missing a Layer

Foundation Capital calls context graphs a trillion-dollar opportunity. But the thesis has a blind spot: it models what organizations know, not what individual users believe, understand, or need. The epistemic layer is the missing piece that turns context infrastructure into personalization infrastructure.

Robert Ta · CEO & Co-Founder · 12 min read

TL;DR

  • Foundation Capital’s December 2025 thesis correctly identifies context graphs as critical enterprise AI infrastructure, a “trillion-dollar opportunity” in decision-trace and workflow context
  • But the thesis has a blind spot: it treats context as purely organizational (decisions, workflows, data lineage) and ignores the user-facing layer entirely
  • The epistemic layer, structured models of what each user believes, knows, and needs, is what turns context infrastructure into personalization infrastructure
  • Players like TrustGraph, Zep/Graphiti, Cognee, and Graphlit are building the organizational context layer. Nobody is building the user layer on top of it.

Context graphs represent the most important infrastructure shift in enterprise AI since RAG. Foundation Capital’s December 2025 thesis frames them as a trillion-dollar opportunity, and that framing is correct. But the thesis contains a blind spot that limits its own promise: it models what organizations know without modeling what individual users within those organizations believe, understand, or need.


The Foundation Capital Thesis: What It Gets Right

Foundation Capital’s context graph thesis deserves to be taken seriously. The core argument is that enterprise AI has a context problem, and that the solution is structured infrastructure for capturing decision traces, workflow context, and data lineage across the organization. This is not just another knowledge graph pitch. It is an argument that context itself is the product.

The thesis identifies several real infrastructure gaps. Enterprise AI systems operate without organizational memory. Decisions made in one system are invisible to agents operating in another. Workflow context evaporates between steps. Data lineage is tracked for compliance but not leveraged for intelligence. The result is AI that is technically capable but organizationally blind.

This diagnosis is accurate. The solution Foundation Capital envisions, a graph-based infrastructure layer that captures and connects these organizational context traces, is architecturally sound. Companies like Cognee and Graphlit are building exactly this kind of infrastructure: tools that make organizational context queryable, connectable, and actionable for AI systems. The investment thesis is strong because the problem is real and the timing is right.

Where the thesis goes wrong is not in what it includes. It is in what it omits.

The Blind Spot: Context Without a User

The Foundation Capital thesis treats context as organizational. Decisions, workflows, data lineage, entity relationships, process traces. All of this context lives at the level of the organization, the team, or the system. None of it lives at the level of the individual user.

This is not a minor omission. It is a structural blind spot that limits the entire value proposition.

Consider what a context graph captures about a product decision. The graph might record that the pricing model was changed on January 15, that the decision was made by the VP of Product, that it affects enterprise accounts, and that the support team was notified on January 18. This is rich organizational context. An AI agent querying this graph can provide accurate, well-sourced answers about the pricing change.

But what the graph does not capture is that Sarah in Solutions Engineering still believes the old pricing applies. That Marcus in Customer Success has not yet informed his three enterprise accounts. That the new hire in Sales has never even heard of the old pricing model. Three users, three completely different epistemic states, all invisible to the context graph.

The AI powered by this graph will give all three users the same response. Technically accurate. Organizationally aware. Personally oblivious.

Context Graph (Organizational Layer Only)

  • Captures decisions, workflows, and data lineage across systems
  • Models entities, relationships, and temporal changes
  • AI agents can query rich organizational context
  • Every user receives the same contextually grounded response

Context Graph + Epistemic Layer (User-Facing)

  • All organizational context preserved as foundation
  • Adds per-user belief models: what each person knows, believes, needs
  • AI detects when a user operates on stale or incorrect beliefs
  • Response adapts to the individual, not just the organization

TrustGraph and the Reification Hint

TrustGraph, the project led by @TrustSpooky, offers a concept that points toward the missing layer even if it does not build it: reification.

In the TrustGraph framework, reification means adding metadata to graph relationships. Not just “Entity A is connected to Entity B,” but adding provenance (where did this relationship come from?), confidence (how reliable is it?), and drift (how has it changed over time?). Reification turns static graph edges into living, qualified relationships.

This maps directly to what user belief models need. A user’s belief that “the old pricing still applies” has provenance (they learned it during onboarding six months ago), confidence (they are fairly certain, 0.82), and drift (this belief was accurate when formed but has since become incorrect). The conceptual machinery for belief modeling already exists in the graph community. What does not exist is the application of that machinery to individual users.
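As a sketch, a reified belief carries those same three properties. The type below is illustrative only, not TrustGraph's or any vendor's actual schema:

```typescript
// Hypothetical shape for a reified belief edge; names are illustrative.
interface ReifiedBelief {
  subject: string;            // who holds the belief
  statement: string;          // the proposition held to be true
  confidence: number;         // 0..1, how strongly it is held
  provenance: { source: string; formedAt: Date };
  driftStatus: "current" | "stale" | "contradicted";
}

// Sarah's pricing belief, expressed with the same reification
// properties TrustGraph applies to organizational edges.
const sarahBelief: ReifiedBelief = {
  subject: "sarah",
  statement: "pricing is per-seat",
  confidence: 0.82,
  provenance: { source: "onboarding session", formedAt: new Date("2025-07-15") },
  driftStatus: "stale", // accurate when formed, incorrect since the Jan 15 change
};
```

The edge-level metadata is identical; only the subject of the edge changes from an organizational fact to a person's belief.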

TrustGraph applies reification to organizational knowledge. The epistemic layer applies reification to user understanding. Same concept, different subject. And the difference matters: organizational knowledge reification improves data quality. User belief reification enables personalization.

The parallel is not accidental. The same properties that make organizational context trustworthy (provenance, confidence, drift tracking) are exactly the properties that make user models useful. If you already believe that reification makes your knowledge graph more valuable, you should believe that reification of user beliefs makes your AI more personal. The conceptual leap is small. The implementation gap is large.


Temporal Graphs and the Time Problem

Zep and its Graphiti framework represent another piece of the puzzle. Temporal knowledge graphs track how facts change over time. Not just “what is true now” but “what was true when, what changed, and in what sequence.” This temporal dimension is critical for organizational context because enterprise reality is not static. Pricing changes. Features ship. Teams reorganize. Policies update.

Temporal graphs solve the organizational staleness problem. But they do not solve the user staleness problem, and these are fundamentally different challenges.

The organizational staleness problem asks: “Has the fact changed since it was recorded?” A temporal graph handles this well. The pricing changed on January 15. The graph knows this. Any agent querying the graph gets the current pricing.

The user staleness problem asks: “Does this user know the fact has changed?” A temporal graph cannot answer this. The graph knows the pricing changed. It does not know whether Sarah knows. The delta between what is true in the graph and what is true in the user’s mental model is the epistemic gap. And this gap is where bad AI experiences live.

A temporal knowledge graph paired with an epistemic layer creates a uniquely powerful combination. The temporal graph tracks fact drift (what changed in the world). The epistemic layer tracks belief drift (what changed in the user’s understanding). Together, they can detect the most dangerous state in enterprise AI: a user who is confidently operating on beliefs that were once correct but are no longer true. Neither system can detect this alone.

temporal-epistemic-detection.ts

// Temporal graph: tracks organizational fact changes (what changed in the world)
const factChange = await temporalGraph.getChange({
  entity: 'pricing_model',
  changedAt: '2026-01-15',
  previousValue: 'per-seat',
  currentValue: 'usage-based'
});

// Epistemic layer: tracks user belief state (what the user still believes)
const userBelief = await clarity.getSelfModel(userId);
// Returns:
// - Belief: 'pricing is per-seat' (confidence: 0.82)
// - Provenance: onboarding session, 6 months ago
// - Last validated: never updated since formation

// Combined: detect the dangerous epistemic gap (stale belief detection)
const gap = detectEpistemicGap(factChange, userBelief);
// gap.type = 'stale_belief'
// gap.severity = 'high' (user has CFO meeting next week)
// gap.action = 'proactively correct before meeting'

The Agent Memory Adjacent Space

Mem0 and Letta occupy an adjacent but distinct space. These are agent memory systems that give AI agents persistence across conversations. Mem0 stores facts extracted from conversations. Letta provides long-term memory architectures for agents. Both are valuable. Neither solves the epistemic layer problem.

The distinction matters because agent memory and user models serve different purposes. Agent memory answers: “What has this agent learned from past interactions?” User models answer: “What does this user believe, know, and need right now?”

Agent memory is agent-centric. It improves the agent’s recall. User models are user-centric. They improve the agent’s understanding. An agent with perfect memory of every past conversation still does not know what the user currently believes about the pricing model unless the user explicitly stated it. User beliefs are inferred from patterns of behavior, not extracted from explicit statements. The belief that “the old pricing applies” is rarely stated aloud. It is revealed through actions: the user quotes old pricing to a prospect, or skips the pricing update email, or asks a question that only makes sense under the old model.
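To make the inference point concrete, here is a toy sketch of behavior-driven belief updating. The signal kinds and evidence weights are invented for illustration; a real pipeline would learn them from data.

```typescript
// Toy behavior-driven belief inference; all names and weights illustrative.
type Signal =
  | { kind: "quoted_old_pricing" }         // action consistent with the stale belief
  | { kind: "skipped_update_email" }       // missed the correction channel
  | { kind: "asked_about_usage_billing" }; // question that contradicts the belief

interface InferredBelief {
  statement: string;
  confidence: number; // 0..1
}

function updateBelief(belief: InferredBelief, signal: Signal): InferredBelief {
  const delta =
    signal.kind === "quoted_old_pricing" ? 0.15 :
    signal.kind === "skipped_update_email" ? 0.05 :
    -0.2;
  const confidence = Math.min(1, Math.max(0, belief.confidence + delta));
  return { ...belief, confidence };
}

// "The old pricing applies" is never stated aloud; it is inferred from actions.
let belief: InferredBelief = { statement: "pricing is per-seat", confidence: 0.5 };
belief = updateBelief(belief, { kind: "quoted_old_pricing" });   // ≈ 0.65
belief = updateBelief(belief, { kind: "skipped_update_email" }); // ≈ 0.70
```

Note that no step above requires the user to state the belief; every update comes from observed behavior.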

This is why memory systems and belief models are complementary, not competing. Mem0 gives your agent memory. Clarity gives your agent understanding. The agent needs both, but they are architecturally different systems solving architecturally different problems.

The Epistemic Layer: What It Actually Is

The epistemic layer is a structured, per-user representation of three things: beliefs (what the user holds to be true, with confidence scores), knowledge state (what the user knows and does not know, with gap detection), and goals (what the user is trying to accomplish, with temporal context).
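Those three components can be written down as a single per-user record. The shape below is an illustrative sketch, not Clarity's actual schema:

```typescript
// Illustrative per-user epistemic record; field names are hypothetical.
interface SelfModel {
  beliefs: { statement: string; confidence: number }[];                   // held true, scored
  knowledgeState: { known: string[]; gaps: string[] };                    // with gap detection
  goals: { description: string; horizon: "now" | "week" | "quarter" }[];  // temporal context
}

const sarahModel: SelfModel = {
  beliefs: [{ statement: "pricing is per-seat", confidence: 0.82 }],
  knowledgeState: { known: ["per-seat pricing"], gaps: ["usage-based pricing"] },
  goals: [{ description: "prep enterprise renewal call", horizon: "week" }],
};
```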

Unlike organizational context, which is derived from system data, epistemic context is inferred from user behavior and updated continuously. Unlike agent memory, which stores conversation history, the epistemic layer maintains a living model that evolves with every interaction.

The key architectural insight is that the epistemic layer does not replace the context graph. It sits on top of it. The context graph provides the ground truth about the organizational world. The epistemic layer provides the per-user lens through which that ground truth should be interpreted and delivered.

This layered architecture means the epistemic layer benefits from every improvement to the underlying context graph. Better organizational context means more opportunities to detect epistemic gaps. More entity relationships mean more dimensions along which user understanding can be modeled. The two systems compound each other.
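A minimal in-memory sketch of that layering: ground truth comes from the graph, beliefs live per user, and epistemic gaps fall out of joining the two. All names here are illustrative.

```typescript
// Illustrative layering: context graph supplies ground truth, the
// epistemic layer supplies per-user beliefs, and gaps are the join.
interface Fact { entity: string; value: string }
interface Belief { entity: string; value: string; confidence: number }
interface Gap { entity: string; believed: string; actual: string }

function detectGaps(facts: Fact[], beliefs: Belief[]): Gap[] {
  const truth = new Map(facts.map(f => [f.entity, f.value]));
  return beliefs
    .filter(b => truth.has(b.entity) && truth.get(b.entity) !== b.value)
    .map(b => ({ entity: b.entity, believed: b.value, actual: truth.get(b.entity)! }));
}

// More facts in the graph means more joins, hence more detectable gaps.
const facts: Fact[] = [{ entity: "pricing_model", value: "usage-based" }];
const beliefs: Belief[] = [{ entity: "pricing_model", value: "per-seat", confidence: 0.82 }];
const gaps = detectGaps(facts, beliefs);
// → [{ entity: "pricing_model", believed: "per-seat", actual: "usage-based" }]
```

The compounding is visible in the code: the gap detector can only be as good as the fact table the context graph feeds it.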

| Layer | Models | Example | Update Cadence | Key Players |
|---|---|---|---|---|
| Context Graph | Organizational facts | “Pricing changed Jan 15” | When domain changes | Cognee, Graphlit, Neo4j |
| Temporal Extension | Fact changes over time | “Pricing was per-seat, now usage-based” | When facts change | Zep/Graphiti |
| Reification Metadata | Relationship quality | “This edge has 0.94 confidence” | With each validation | TrustGraph |
| Agent Memory | Conversation history | “User asked about pricing last week” | Each conversation | Mem0, Letta |
| Epistemic Layer | User belief state | “User believes old pricing applies (0.82)” | Every interaction | Clarity |

Why Nobody Builds This

There are structural reasons the context graph ecosystem has not built the epistemic layer.

Different buyer, different budget. Context graph infrastructure sells to data engineering, platform teams, and IT leadership. User-level personalization sells to product teams, AI teams, and customer success. The buyers are different. The budgets are different. The evaluation criteria are different. Context graph vendors optimizing for their existing buyers have no incentive to build for a buyer they do not serve.

Different data model entirely. Organizational context is factual, shared, and relatively stable. User epistemic state is inferential, personal, and constantly changing. Storing beliefs with confidence scores, provenance, and drift metadata requires a fundamentally different data architecture than storing entity relationships. The technical investment to support both models in one system is substantial, and context graph vendors would rather deepen their organizational context capabilities.

Different privacy regime. Organizational context is, by definition, organizational. It can be shared broadly within the company. User belief models are personal. They require individual consent, granular access controls, and the ability for users to inspect and correct their own models. The privacy architecture is not just different in degree. It is different in kind. And privacy infrastructure is expensive to build.

Different inference methodology. Context graph data is derived from systems of record: CRMs, product analytics, support systems, documentation. User beliefs are inferred from behavioral signals: what they click, what they skip, what they ask, what they get wrong. The inference pipeline for belief models looks nothing like the ingestion pipeline for context graphs. It requires ML models for belief extraction, Bayesian updating for confidence management, and temporal decay models for staleness detection.
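Two of those pieces, Bayesian confidence updating and temporal decay, fit in a few lines. This is a sketch of the standard math with illustrative parameter values, not any vendor's implementation:

```typescript
// Bayesian update in odds form:
// posterior odds = prior odds x likelihood ratio of the observed signal.
function bayesianUpdate(prior: number, likelihoodRatio: number): number {
  const odds = (prior / (1 - prior)) * likelihoodRatio;
  return odds / (1 + odds);
}

// Exponential staleness decay: confidence halves every `halfLifeDays`
// a belief goes unvalidated. The 90-day half-life is an assumption.
function decay(confidence: number, daysSinceValidated: number, halfLifeDays = 90): number {
  return confidence * Math.pow(0.5, daysSinceValidated / halfLifeDays);
}

const afterSignal = bayesianUpdate(0.5, 3); // signal with 3x likelihood ratio → 0.75
const afterDecay = decay(0.8, 90);          // one half-life unvalidated → 0.4
```

Nothing resembling either function exists in a context-graph ingestion pipeline, which is the point: the two systems compute fundamentally different things.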

These are not technical limitations. They are market structure limitations. The epistemic layer will not come from context graph vendors. It will come from a purpose-built system designed for user-level understanding from the ground up.

The Trillion-Dollar Correction

Foundation Capital is right that context graphs are a trillion-dollar opportunity. But the opportunity is not just in organizational context infrastructure. It is in the complete context stack, from organizational facts to individual understanding.

The current context graph ecosystem, Cognee, Graphlit, TrustGraph, Zep/Graphiti, and others, is building the bottom layers of this stack. They are solving real problems: decision trace capture, temporal fact management, relationship reification, entity resolution at scale. This work is necessary. It is not sufficient.

The sufficiency comes from the epistemic layer. When a context graph can tell the AI not just “the pricing changed” but “this user does not know the pricing changed and has a client meeting tomorrow,” the system crosses from organizational awareness into individual understanding. That crossing is where the personalization value lives.

The trillion-dollar opportunity is real. But the trillion-dollar outcome requires all five layers, not just four.

The Thesis as Written

  • Context graphs capture organizational decisions and workflows
  • Enterprise AI gains structured access to institutional knowledge
  • AI agents become organizationally aware
  • Trillion-dollar infrastructure opportunity

The Thesis Completed

  • Context graphs capture organizational decisions and workflows
  • Epistemic layer captures individual user beliefs and knowledge gaps
  • AI agents become both organizationally aware and personally intelligent
  • Trillion-dollar infrastructure plus trillion-dollar personalization

What to Do Next

  1. Map your context architecture against all five layers. Most enterprise AI teams have invested in one or two layers (typically RAG plus some form of knowledge graph). Identify which layers you have, which you are missing, and where the epistemic gap is largest.

  2. Evaluate your context graph vendor for user-level extensibility. If you are using or evaluating Cognee, Graphlit, Neo4j, or similar infrastructure, ask whether the system can support per-user context that is inferential, confidence-weighted, and continuously updated. If the answer is no (and it usually is), you need a separate layer.

  3. Start with the highest-value epistemic signals. You do not need to model every user belief from day one. Start with expertise level (does this user need the beginner explanation or the expert explanation?), knowledge currency (is this user operating on stale information?), and current goal (is this user exploring, evaluating, or executing?). These three signals alone transform generic AI interactions into personal ones.

  4. Layer the epistemic model on top of your existing infrastructure. The epistemic layer does not replace your context graph. It reads from it. It enriches it. It provides the per-user interpretation layer that makes the organizational context actionable at the individual level. Clarity provides this epistemic layer as infrastructure: self-models that integrate with any context graph, RAG system, or agent framework.
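The three starter signals from step 3, and a routing decision built on them, can be sketched like this (all names hypothetical):

```typescript
// Hypothetical starter model: three epistemic signals and a routing decision.
type Expertise = "beginner" | "intermediate" | "expert";
type Goal = "exploring" | "evaluating" | "executing";

interface EpistemicSignals {
  expertise: Expertise;      // beginner vs. expert explanation
  knowledgeCurrent: boolean; // false = operating on stale information
  currentGoal: Goal;
}

function responseStrategy(s: EpistemicSignals): string {
  if (!s.knowledgeCurrent) return "correct-stale-belief-first";
  if (s.expertise === "beginner") return "explain-from-basics";
  return s.currentGoal === "executing" ? "concise-actionable" : "comparative-depth";
}

// A stale belief outranks every other signal, per the thesis above.
const strategy = responseStrategy({
  expertise: "expert",
  knowledgeCurrent: false,
  currentGoal: "executing",
});
// → "correct-stale-belief-first"
```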


The context graph ecosystem is building a trillion-dollar foundation. The epistemic layer is what turns that foundation into a building people actually want to live in. Add the missing layer.

