
Context Graphs Miss the Epistemic Layer

Enterprise context graphs map relationships between entities. But they miss the most important context of all: what each user believes, knows, and needs. The epistemic layer is the missing piece.

Robert Ta · CEO & Co-Founder · 7 min read

TL;DR

  • Enterprise context graphs model entities and relationships (accounts, products, features) but systematically ignore the epistemic state of individual users: what they believe, know, and need
  • This creates a paradox: AI systems with rich domain context still deliver generic experiences because they lack user-level understanding
  • Adding an epistemic layer (user belief models) on top of existing context graphs transforms them from data infrastructure into personalization infrastructure, with a 2-3x improvement in AI quality

Context graphs miss the epistemic layer because they model entities and relationships but not the beliefs, knowledge, and goals of individual users. The result is AI systems with rich domain context that still deliver generic experiences to every person. This post covers what the epistemic layer is, why context graph vendors do not build it, and how to add user-level understanding on top of your existing graph infrastructure.


What Context Graphs Get Right

Context graphs are powerful and their value is real. Let me be clear about what they do well before discussing what they miss.

Entity resolution. Context graphs connect disparate data about the same entity. Account X in the CRM, Account X in the product analytics, Account X in the support system: the graph unifies these into a single entity with rich, cross-system context.

Relationship mapping. The connections between entities create navigable context. Which features are most used by accounts in this industry? Which support issues correlate with which product configurations? These relationship paths enable powerful reasoning.

Temporal context. Good context graphs track how entities and relationships change over time. Account X adopted Feature Y six months ago and has since reduced support tickets by 30%. This temporal dimension enables trend analysis and prediction.

RAG enhancement. When used for retrieval-augmented generation, context graphs dramatically improve the relevance of retrieved context. Instead of semantic similarity alone, the graph provides structured relationships that ground the AI’s responses in domain reality.
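Graph-grounded retrieval can be sketched as boosting semantic-search hits whose chunks attach to entities the graph says are related to the entity in question. Everything below is illustrative: the `Chunk` shape, the boost value, and the function name are assumptions, not a real retrieval API.

```typescript
type Chunk = { id: string; entity: string; similarity: number };

// Boost semantic-search hits whose entity is related (in the graph)
// to the entity the question is about, so retrieval follows structured
// relationships rather than semantic similarity alone.
function graphBoostedRank(
  hits: Chunk[],
  relatedEntities: Set<string>,
  boost = 0.2
): Chunk[] {
  return hits
    .map((h) => ({
      ...h,
      similarity: h.similarity + (relatedEntities.has(h.entity) ? boost : 0),
    }))
    .sort((a, b) => b.similarity - a.similarity);
}
```

A chunk about a graph-related entity can now outrank a merely similar-sounding chunk, which is the "grounding in domain reality" the paragraph describes.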

These capabilities are genuinely valuable. I am not arguing against context graphs. I am arguing that they solve only half the context problem.

Context Graph Without Epistemic Layer

  • AI knows Account X uses Feature Y and Z
  • AI knows this industry segment has specific patterns
  • AI has no idea what this specific user understands or needs
  • Same response for a power user and a confused beginner

Context Graph With Epistemic Layer

  • AI knows Account X uses Feature Y and Z
  • AI knows this user is confused about Feature Y pricing
  • AI knows this user is expert-level on Feature Z
  • Response adapts depth, tone, and focus to the individual

The Epistemic Layer Defined

The epistemic layer is a structured model of what each user believes, knows, and needs. It sits on top of the context graph and provides the user-level context that the graph lacks.

Where the context graph models the world (entities, relationships, facts), the epistemic layer models the user’s relationship to the world (beliefs about entities, knowledge of relationships, confusion about facts).

This distinction is critical. The context graph might know that Feature Y has a new pricing model. The epistemic layer knows that this specific user does not know about the pricing change, believes the old pricing still applies, and has a meeting with their CFO next week where the pricing will be discussed. The context graph provides the what. The epistemic layer provides the who-needs-what-and-why.

epistemic-layer.ts
// Context graph: rich domain knowledge (models the world)
const domainContext = await knowledgeGraph.query({
  entity: 'account_x',
  relationships: ['uses_features', 'support_history', 'industry']
});

// Epistemic layer: user-level understanding (models the user)
const userContext = await clarity.getSelfModel(userId);
// Returns:
// - Belief: User thinks old pricing applies (0.82 confidence)
// - Knowledge gap: Unaware of new pricing model
// - Goal: Preparing for CFO meeting next week
// - Expertise: Expert on Feature Z, beginner on Feature Y

// Combined: personalized AI response (world context + user context)
const response = await ai.generate({
  domain: domainContext, // What is true about the world
  user: userContext,     // What this user needs to know
  // AI now knows to proactively explain the pricing change
  // in a way that helps the user prepare for their CFO meeting
});

The Three Components of the Epistemic Layer

An epistemic layer tracks three categories of user-level context.

Beliefs. What does the user believe to be true? This includes correct beliefs (the user knows Feature Z supports batch processing), incorrect beliefs (the user thinks the old pricing still applies), and uncertain beliefs (the user is not sure whether Feature Y integrates with their CRM). Beliefs have confidence scores that reflect how strongly the user holds them.

Knowledge state. What does the user know and not know? This is related to but distinct from beliefs. A user might have no belief about Feature Y’s new capability, not because they believe it does not exist, but because they have never encountered the information. Knowledge gaps are opportunities for proactive education.

Goals and needs. What is the user trying to accomplish? A user exploring the pricing page has a different need than a user exploring the API documentation, even if they are the same person on the same account. Goals provide the interpretive frame for how domain context should be presented.
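The three components above can be sketched as a data model. The shapes and field names here are hypothetical illustrations of the concepts, not Clarity's actual schema.

```typescript
// Illustrative data model for an epistemic layer (hypothetical fields).

type Belief = {
  statement: string;  // e.g. "old pricing still applies"
  holds: boolean;     // does the user believe it to be true?
  confidence: number; // 0..1, how strongly we infer they hold it
};

type KnowledgeState = {
  topic: string;
  status: "known" | "gap"; // a gap = never encountered, not disbelieved
};

type Goal = {
  description: string;  // e.g. "prepare for CFO meeting"
  inferredFrom: string; // the signal that produced the inference
};

type EpistemicModel = {
  userId: string;
  beliefs: Belief[];
  knowledge: KnowledgeState[];
  goals: Goal[];
};

// A gap is distinct from a negative belief: the user has simply
// never seen the information, so it is a proactive-education target.
function knowledgeGaps(model: EpistemicModel): string[] {
  return model.knowledge
    .filter((k) => k.status === "gap")
    .map((k) => k.topic);
}
```

Note how the model encodes the distinction the text draws: a missing `KnowledgeState` entry or a `"gap"` status is not the same as a `Belief` with `holds: false`.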

| Context Type                | Source                | Example                      | Value for AI                        |
|-----------------------------|-----------------------|------------------------------|-------------------------------------|
| Entity relationship (graph) | Product data          | Account X uses Feature Y     | Grounds responses in domain reality |
| User belief (epistemic)     | Interaction inference | User thinks old pricing applies | Enables proactive correction     |
| Knowledge gap (epistemic)   | Absence detection     | User unaware of new pricing  | Enables proactive education         |
| User goal (epistemic)       | Intent inference      | Preparing for CFO meeting    | Frames the response appropriately   |

Why Context Graph Vendors Miss This

I have spoken with teams at several enterprise context graph companies. They are aware of the gap but face structural reasons for not addressing it.

Different data model. Context graphs model entities and relationships: static or slowly changing facts about the world. User epistemic state is dynamic, subjective, and confidence-weighted. It requires a fundamentally different data model. Bolting user models onto an entity graph is like storing time-series data in a relational database: technically possible, architecturally wrong.

Different update cadence. The knowledge graph updates when the domain changes (new product, new feature, new account data). The epistemic layer updates with every user interaction. The update velocity differs by orders of magnitude.

Different privacy model. Entity data in a knowledge graph is organizational. User belief models are personal. They require different privacy controls, access patterns, and data governance. The privacy architecture for user models is fundamentally different from the privacy architecture for domain data.

Different value proposition. Context graph companies sell to data engineering and analytics teams. User modeling sells to product and AI teams. Different buyers, different value props, different go-to-market.

Trade-offs

Adding an epistemic layer introduces real complexity.

Inference uncertainty. User beliefs are inferred from interactions, which is inherently noisy. A user who asks about pricing is probably confused about pricing, but they might be researching for a blog post or helping a colleague. Incorrect epistemic inferences can lead to patronizing or irrelevant responses.
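One mitigation is to act only on high-confidence inferences and stay neutral otherwise. A minimal sketch, assuming the confidence scores described above; the 0.7 threshold is an illustrative choice, not a recommended value.

```typescript
type InferredBelief = { statement: string; confidence: number };

// Only surface a proactive correction when the inference is strong;
// below the threshold, stay neutral rather than risk a patronizing
// or irrelevant response to a mis-read signal.
function shouldCorrect(belief: InferredBelief, threshold = 0.7): boolean {
  return belief.confidence >= threshold;
}
```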

Model staleness. User epistemic state changes faster than domain context. A user who was confused about pricing yesterday might have read the documentation today. Epistemic models need aggressive freshness management.
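Freshness management can be approximated with time-based confidence decay, so yesterday's inference stops driving today's behavior. The exponential form and the 48-hour half-life below are assumptions for illustration.

```typescript
// Exponential decay of inference confidence with age: after one
// half-life the confidence is halved, so stale beliefs fall below
// any action threshold on their own. halfLifeHours is an assumed knob.
function decayedConfidence(
  confidence: number,
  ageHours: number,
  halfLifeHours = 48
): number {
  return confidence * Math.pow(0.5, ageHours / halfLifeHours);
}
```

Paired with a confidence threshold, this gives aggressive freshness for free: a belief inferred yesterday about pricing confusion simply decays out unless new interactions refresh it.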

Integration complexity. Making the epistemic layer work with an existing context graph requires bridging two different data models, two different update patterns, and two different access control systems. This is non-trivial integration work.

Cold start for the epistemic layer. The context graph is populated from existing data sources. The epistemic layer starts empty for each user and fills through interaction. There is an awkward period where the AI has rich domain context but no user context.
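A common way to soften that awkward period is falling back to segment-level priors (role, plan, industry) until enough per-user signal accumulates. The shape below is a sketch under that assumption; the interaction threshold is arbitrary.

```typescript
type UserModel<T> = {
  interactions: number; // how many interactions have fed this model
  context: T | null;    // per-user epistemic context, if any
};

// Until a user has enough interactions to support per-user inference,
// fall back to a prior built from their segment. Domain context from
// the graph is available either way; only the user layer is cold.
function effectiveContext<T>(
  userModel: UserModel<T>,
  segmentPrior: T,
  minInteractions = 5
): T {
  if (userModel.interactions >= minInteractions && userModel.context !== null) {
    return userModel.context;
  }
  return segmentPrior;
}
```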

What to Do Next

  1. Audit your context-to-user ratio. Count the data points your AI system has about the domain vs the data points it has about each individual user. If the ratio is worse than 100:1, you have a severe epistemic gap.

  2. Identify your highest-value epistemic signals. What user-level understanding would most improve your AI’s responses? Usually it is expertise level (beginner vs expert), current goal (exploring vs buying vs troubleshooting), and knowledge gaps (what the user does not know that they should).

  3. Layer epistemic context on top of your existing graph. You do not need to replace your context graph. You need to add a user-level layer. Clarity provides the epistemic layer infrastructure: user belief models that integrate with any existing context graph or RAG system.
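Step 1 can be made concrete with a quick ratio check. The 100:1 threshold comes from the text above; the function name and shape are illustrative.

```typescript
// Context-to-user ratio audit: domain data points available to the AI
// divided by data points about each individual user. A ratio above
// 100:1 signals a severe epistemic gap (per step 1 above).
function epistemicGapSeverity(
  domainDataPoints: number,
  userDataPoints: number
): "severe" | "ok" {
  const perUser = Math.max(userDataPoints, 1); // avoid divide-by-zero
  return domainDataPoints / perUser > 100 ? "severe" : "ok";
}
```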


Your knowledge graph knows everything about the world. It knows nothing about the user. Add the epistemic layer.

