
Context Graphs Will Commoditize. User Models Won't.

Infrastructure commoditizes. Every technology wave proves it. Context graph capabilities like entity extraction, relationship mapping, and graph traversal are already converging across vendors. What does not commoditize is the interpretation layer: understanding what data means for each specific user. User models are the durable moat.

Robert Ta, CEO & Co-Founder · 10 min read

TL;DR

  • Context graph infrastructure (entity extraction, relationship mapping, graph traversal) is converging across vendors and will commoditize within 2-3 years, following the same pattern as storage, compute, and databases
  • User models are inherently non-commodity: they are unique per-user, compound with every interaction, and create switching costs that grow over time
  • The durable moat in the context graph ecosystem is not the graph. It is the interpretation layer that understands what graph data means for each specific user.

Infrastructure commoditizes. This is not a prediction. It is the lesson of every technology wave in the last two decades. Storage commoditized. Compute commoditized. Databases commoditized. The same pattern is now unfolding in the context graph space, and most companies building on graph infrastructure are not prepared for what happens when the plumbing becomes interchangeable. This post maps the commoditization pattern, identifies what remains durable, and explains why user models sit on the non-commodity side of the value chain.

7+
context graph vendors with converging core capabilities
15%
or less differentiation in entity extraction accuracy across vendors
2-3 yrs
estimated time to full infrastructure commoditization
0
vendors that can replicate accumulated user understanding

The Commoditization Pattern Is Predictable

Every infrastructure layer follows the same arc. A breakthrough capability emerges. Early vendors differentiate on raw performance. Competitors catch up. Features converge. Pricing compresses. The capability becomes a utility.

Storage. Amazon S3 launched in 2006 as a novel capability: reliable, scalable object storage via API. Within a decade, Google Cloud Storage, Azure Blob Storage, Backblaze B2, Wasabi, and a dozen others offered functionally equivalent services. Storage is now a commodity. Nobody builds a company moat on which object store they use.

Compute. AWS EC2 was revolutionary in 2006. By 2015, compute-on-demand was available from every major cloud provider at comparable price points. The differentiation moved up the stack to orchestration, serverless, and managed services. Raw compute became a utility input.

Databases. PostgreSQL, MySQL, and their managed variants (RDS, Cloud SQL, PlanetScale, Neon) are functionally interchangeable for most workloads. The database layer commoditized. The value migrated to what you build on top of it.

The pattern is consistent: infrastructure differentiates early, converges quickly, and commoditizes within a technology generation. The value then migrates to the layer above the infrastructure, the layer that interprets, personalizes, and creates unique outcomes from commodity inputs.

Early in the Cycle (Differentiation)

  • Few vendors, large capability gaps
  • Performance differences drive purchasing decisions
  • Infrastructure choice is a strategic advantage
  • High margins, limited competition

Late in the Cycle (Commodity)

  • Many vendors, converging capabilities
  • Price and convenience drive purchasing decisions
  • Infrastructure choice is a procurement decision
  • Compressed margins, intense competition

Context Graphs Are Entering the Commodity Phase

The signs of commoditization in the context graph space are already visible.

Vendor convergence. Cognee, Graphlit, TrustGraph, and Neo4j all offer entity extraction, relationship mapping, graph storage, and traversal as core capabilities. The APIs differ. The data models vary slightly. But the functional output, a queryable graph of entities and relationships, is converging across all of them. Zep, Mem0, and LangGraph are approaching the same space from the memory and agent tooling angle. The capability surface is filling in from multiple directions.
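
To make the convergence concrete, here is a minimal sketch of that common functional output. The type names below are illustrative, not any vendor's actual API; the point is that each vendor's response normalizes into roughly the same shape, which is exactly what makes the layer interchangeable.

// Illustrative only: no vendor ships these exact types, but every
// vendor's query response can be normalized into this shape.
interface Entity {
  id: string;
  type: string;                        // e.g. 'account', 'product', 'ticket'
  properties: Record<string, unknown>;
}

interface Relationship {
  source: string;                      // Entity id
  target: string;                      // Entity id
  type: string;                        // e.g. 'uses', 'filed', 'depends_on'
}

interface GraphQueryResult {
  entities: Entity[];
  relationships: Relationship[];
}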

Open-source pressure. Neo4j’s community edition, Apache AGE (graph extension for PostgreSQL), and open-source embedding and extraction pipelines mean that the core graph capabilities are freely available. When the open-source baseline is strong, the proprietary ceiling compresses.

LLM provider integration. OpenAI, Anthropic, and Google are building memory, tool use, and context management directly into their platforms. When the foundational model providers start absorbing graph-adjacent capabilities, standalone graph infrastructure vendors face margin pressure from above and open-source pressure from below.

Feature parity in core operations. Entity extraction accuracy, relationship inference quality, and graph traversal performance differ by less than 15% across the leading vendors. For most enterprise workloads, these differences are not material. The infrastructure is becoming interchangeable.

This does not mean context graphs are unimportant. It means they are becoming table stakes. The graph is necessary infrastructure, just as S3 is necessary infrastructure for modern applications. But necessary and differentiating are not the same thing.

7+
vendors converging on core graph capabilities
3+
open-source graph alternatives with production-grade quality
3
LLM providers building memory/context into their platforms

Why User Models Do Not Follow the Same Pattern

User models resist commoditization because they possess three properties that infrastructure lacks: uniqueness, compounding, and switching costs.

Property 1: Uniqueness

A context graph schema can be standardized. Entity types, relationship categories, and traversal patterns are domain-specific but structurally repeatable. Two companies in the same industry can use nearly identical graph schemas.

User models cannot be standardized because they are unique per-user by definition. A user model captures what a specific individual believes, knows, needs, and has experienced. No two user models are alike because no two users are alike. You cannot templatize a user model the way you templatize a graph schema.

This uniqueness means user models cannot be commoditized in the traditional sense. There is no generic “user model service” that works across users because the entire value is in the per-user specificity. A vendor can provide the platform for building and maintaining user models. But the models themselves belong to the accumulated relationship between the product and each user.
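
A hypothetical sketch makes the distinction concrete. The shape below is not Clarity's actual data model; it illustrates that the schema of a user model can be shared across users while every value in it belongs to exactly one person, which is why the model itself cannot be templatized:

// Hypothetical user-model shape. The schema is reusable; the values
// are not. Two users with identical fields still have entirely
// different models.
interface UserModel {
  userId: string;
  beliefs: Array<{
    statement: string;     // e.g. 'pricing is per-seat' (possibly outdated)
    confidence: number;    // 0..1, refined as evidence accumulates
    lastReinforced: Date;
  }>;
  expertiseLevel: 'novice' | 'intermediate' | 'expert';
  prefersTechnicalDepth: boolean;
  knowledgeGaps: string[]; // topics the AI should proactively address
}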

Property 2: Compounding

Infrastructure does not improve with use. An S3 bucket that has stored objects for five years does not store objects better than a new bucket. A PostgreSQL database with years of queries is not smarter than a fresh instance.

User models compound. Every interaction adds signal. Every belief update refines understanding. A user model after 500 interactions has qualitatively different depth than a user model after 5 interactions. The model learns that this user prefers technical depth over executive summaries, that they are skeptical of ROI claims but responsive to architectural diagrams, that their understanding of the product’s pricing model is outdated because they onboarded before the last pricing change.

This compounding creates a widening gap between incumbents and challengers. A new entrant can replicate the graph infrastructure. They cannot replicate three months of accumulated user understanding. The longer the user model has been learning, the harder it is to match.
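
The compounding loop can be sketched directly, reusing the hypothetical UserModel shape above (again an illustration, not Clarity's implementation): every interaction either reinforces an existing belief or adds a new one, so depth grows monotonically with usage and a fresh instance has no shortcut to catch up.

// Each interaction emits a signal; the model absorbs it. Depth is a
// function of interaction count, which is why it cannot be copied.
function updateModel(
  model: UserModel,
  signal: { statement: string; supports: boolean }
): UserModel {
  const belief = model.beliefs.find(b => b.statement === signal.statement);
  if (belief) {
    // Reinforce or weaken: nudge confidence toward 1 or 0.
    const target = signal.supports ? 1 : 0;
    belief.confidence += 0.2 * (target - belief.confidence);
    belief.lastReinforced = new Date();
  } else {
    // First observation: record a weak prior, to be refined later.
    model.beliefs.push({
      statement: signal.statement,
      confidence: signal.supports ? 0.55 : 0.45,
      lastReinforced: new Date(),
    });
  }
  return model;
}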

Property 3: Switching Costs

Migrating between context graph vendors is an engineering project. Export entities and relationships from Vendor A, transform the schema, import into Vendor B. It takes days or weeks. Annoying, but achievable.

Migrating user models is fundamentally different. The model is not a static data export. It is the accumulated output of thousands of interactions, belief updates, and contextual inferences. You can export the data. You cannot export the learning. Re-accumulating user understanding requires re-running the interactions, which means waiting for users to interact with the new system for months before the models reach equivalent depth.

The switching cost of infrastructure is measured in migration hours. The switching cost of user models is measured in months of re-accumulated understanding.

Context Graph (Commodity Trajectory)

  • Standardized schemas, repeatable across companies
  • No improvement with use over time
  • Switching cost: days to weeks of migration
  • Value: accurate data, interchangeable across vendors

User Model (Non-Commodity)

  • Unique per-user, cannot be templatized
  • Compounds with every interaction
  • Switching cost: months of re-accumulated understanding
  • Value: per-user intelligence, grows with usage

The Value Migration Is Already Happening

In every infrastructure wave, the value migrates from the plumbing to the interpretation layer. The pattern is consistent:

Storage wave. S3 commoditized. The value migrated to what you build on top of storage: data lakes, analytics platforms, content delivery networks. Nobody differentiates on which object store they use. They differentiate on what they do with the stored data.

Compute wave. EC2 commoditized. The value migrated to orchestration (Kubernetes), serverless (Lambda), and the applications running on compute. The compute is interchangeable. The workloads are not.

Database wave. Managed PostgreSQL commoditized. The value migrated to the data models, access patterns, and application logic built on top of the database. The database is a utility. The schema and queries are proprietary.

Context graph wave. Graph infrastructure is commoditizing. The value is migrating to the interpretation layer: understanding what the graph data means for each specific user. The graph will become a utility input. The user model will become the proprietary asset.

Companies that recognize this pattern early invest in the interpretation layer while competitors are still optimizing the plumbing. Companies that recognize it late find themselves competing on price for commodity infrastructure.

The Commoditization Stack

Layer | Example | Commoditized? | Where Value Lives
Storage | S3, GCS, Azure Blob | Yes | Data platforms built on top
Compute | EC2, GCE, Azure VMs | Yes | Orchestration and applications
Databases | RDS, Cloud SQL, PlanetScale | Yes | Data models and application logic
Context Graphs | Neo4j, Cognee, Graphlit | In progress | User models and interpretation

What This Means for Enterprise Buyers

If you are evaluating context graph infrastructure, the commoditization pattern has direct implications for how you allocate investment.

Treat graph infrastructure as a procurement decision, not a strategic one. Choose the vendor that fits your stack, your team’s expertise, and your budget. Do not over-invest in optimizing which graph database you use. The differences are shrinking and will continue to shrink. In three years, the choice between graph vendors will feel like the choice between cloud storage providers today: relevant for operational reasons, irrelevant for strategic differentiation.

Invest in the interpretation layer now. The strategic asset is not the graph. It is the per-user understanding that sits on top of the graph. Start accumulating user models early because their value compounds with time. Every month you delay is a month of user understanding you do not accumulate.

Evaluate vendors on user-level intelligence, not graph capabilities. When comparing platforms, ask: “What does your system know about individual users after 30 days of interaction? After 90 days? After a year?” If the answer is limited to session history and basic preferences, you are looking at infrastructure, not intelligence.

Build for portability at the graph layer, stickiness at the user layer. Your graph should be portable. Use open standards, avoid proprietary lock-in, maintain the ability to switch vendors. Your user models should be sticky. Invest in depth, coverage, and compounding. A portable infrastructure layer paired with a sticky intelligence layer is the optimal architecture.
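
One way to sketch that architecture, reusing the hypothetical GraphQueryResult and UserModel shapes from earlier: put the graph behind a narrow, vendor-agnostic interface so it stays swappable, while user models accumulate in a store you own, independent of whichever vendor is plugged in underneath. The names here are illustrative, not a prescribed implementation.

// Portable layer: any vendor can be wrapped in one small adapter.
interface GraphProvider {
  query(params: {
    entity: string;
    depth: number;
    include: string[];
  }): Promise<GraphQueryResult>;
}

class VendorAAdapter implements GraphProvider {
  async query(params: {
    entity: string;
    depth: number;
    include: string[];
  }): Promise<GraphQueryResult> {
    // Map the vendor's native response into the common shape.
    return { entities: [], relationships: [] }; // stubbed for the sketch
  }
}

// Sticky layer: user models live in your own store. Swapping the
// GraphProvider leaves everything accumulated here untouched.
class ContextService {
  constructor(
    private graph: GraphProvider,              // replaceable
    private userModels: Map<string, UserModel> // compounding, not replaceable
  ) {}
}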

where-value-compounds.ts
// Graph infrastructure: commodity input (interchangeable)
const graphContext = await anyGraphVendor.query({
  entity: 'account_x',
  depth: 3,
  include: ['products', 'usage', 'support_history']
});

// User model: compounding asset (non-commodity)
const userModel = await clarity.getSelfModel(userId);
// After 500 interactions, this model knows:
// - User's expertise level and communication preferences
// - What they believe about the product (including outdated beliefs)
// - Their decision-making patterns and risk tolerance
// - Knowledge gaps the AI should proactively address
// None of this can be replicated by switching graph vendors.

// The graph provides commodity context.
// The user model provides non-commodity intelligence.
const response = await ai.generate({
  domain: graphContext, // Any vendor's output
  user: userModel,      // Accumulated understanding
});

The Competitive Implications

For companies building on context graph infrastructure, the commoditization timeline creates urgency.

If you are a context graph vendor: your core capabilities will be table stakes within 2-3 years. The differentiation opportunity is in moving up the stack toward user-level intelligence before the infrastructure layer compresses your margins. Vendors who stay at the graph layer will compete on price. Vendors who build the interpretation layer will compete on value.

If you are building AI products on top of graph infrastructure: your graph vendor is replaceable. Your user models are not. Prioritize accumulating per-user understanding now, while the compounding advantage is available. The earlier you start building user models, the larger your advantage over competitors who start later.

If you are an enterprise buyer: audit your current stack. How much of your context investment is in commodity infrastructure versus non-commodity intelligence? If the ratio is heavily skewed toward infrastructure, you are investing in the layer that will lose value over time rather than the layer that gains value over time.

Trade-offs

Compounding takes time. User models are more valuable at month 6 than at month 1. If you need differentiation immediately, the graph layer is where you can optimize today. The user model advantage is a medium-term play.

User models require interaction data. Products with low engagement frequency accumulate user understanding slowly. High-frequency interaction products (daily-use tools, support platforms, learning systems) compound faster than low-frequency products (quarterly reports, annual reviews).

Privacy complexity increases. Graph data is typically organizational (Account X uses Feature Y). User model data is personal (User Z believes the old pricing applies). The governance, consent, and compliance requirements are more stringent for user-level inference data. This is real overhead, but it is also a barrier to entry that reinforces the moat.

The graph still matters. Commoditization does not mean unimportant. You still need reliable graph infrastructure. But you should treat it as operational infrastructure, not strategic differentiation. Choose it wisely. Do not over-invest.

What to Do Next

  1. Map your context investment. Calculate what percentage of your context-related budget goes to graph infrastructure versus user-level intelligence. If the split is 90/10 toward infrastructure, you are over-invested in the layer that will commoditize.

  2. Start accumulating user understanding now. The compounding advantage means every month of delay widens the gap between you and competitors who start earlier. Even basic user models that capture expertise level, knowledge gaps, and communication preferences begin compounding immediately.

  3. Layer user models on your existing graph. You do not need to replace your graph infrastructure. Clarity provides the user intelligence layer that sits on top of any context graph, turning commodity infrastructure into non-commodity per-user understanding.


Your graph infrastructure will be interchangeable within two to three years. Your accumulated user understanding will not. Start compounding now.
