How Clarity Uses Clarity: Our Own Dogfooding Story
We built a self-model API. Then we used it on ourselves. Here is how Clarity uses Clarity to inform marketing decisions, prioritize features, and understand our own customers, plus what broke when we tried.
TL;DR
- We dogfood our own self-model API on the founding team, building founder self-models that track beliefs, customer conversations, and strategic context
- Within two weeks, the system surfaced a strategic contradiction I had carried for months: simultaneously believing we should focus on enterprise AND optimize for developer self-serve
- The dogfooding process revealed three failure modes that external testing never caught: belief oscillation under uncertainty, context collapse when domains overlap, and confidence inflation from repetitive observations
Dogfooding a self-model API on the founding team reveals failure modes that external testing never surfaces, because internal users have the motivation and context to push the system to its limits. Within two weeks of building a founder self-model, the system surfaced a strategic contradiction between enterprise focus and developer self-serve that had gone unnoticed for months. This post covers how the founder self-model works, three failure modes discovered through dogfooding, and how belief coherence scores now drive content strategy.
The Founder Self-Model
My self-model tracks six domains: Product-Market Fit, Revenue Growth, Content Strategy, Engineering Vision, Growth Experiments, and Stakeholder Development. Each domain contains beliefs, structured representations of what I think is true, with confidence scores and observation histories.
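A rough sketch of the shapes involved, in TypeScript. These interfaces are illustrative only; the real Clarity schema may differ in names and structure:

```typescript
// Illustrative shapes only, not the actual Clarity schema.
interface Observation {
  source: string;     // e.g. a customer ID, "blog-post", "team-report"
  quality: number;    // 0..1 reliability weight
  recordedAt: Date;
  summary: string;
}

interface Belief {
  statement: string;    // what I think is true
  confidence: number;   // 0..1, how strongly I hold it
  coherence: number;    // 0..1, consistency with other beliefs in the domain
  alignment: number;    // 0..1, fit with external evidence
  observations: number; // count of supporting observations
  history: Observation[];
}

interface Domain {
  name: string;         // e.g. "Product-Market Fit"
  beliefs: Belief[];
}
```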
When I have a customer meeting, the system records it as an episode and updates relevant beliefs. When I write a blog post, it analyzes the content and checks for consistency with existing beliefs. When I make a strategic decision, it records the decision context and the beliefs that influenced it.
The system does not tell me what to believe. It shows me what I already believe, with more honesty than my own introspection provides.
Decision-Making Without Self-Model
- × Strategic priorities based on recent conversations (recency bias)
- × Contradictions between stated strategy and actual decisions
- × Content topics chosen by intuition and competitive pressure
- × No systematic tracking of belief evolution over time
Decision-Making With Founder Self-Model
- ✓ Strategic priorities weighted by belief coherence and observation depth
- ✓ Contradictions surfaced automatically when belief coherence drops
- ✓ Content topics driven by belief gaps and alignment scores
- ✓ Clear record of how founder thinking evolves with evidence
The Contradiction That Changed Our Strategy
Two weeks after starting the founder self-model, the system flagged something I had not noticed. My belief coherence score in the Product-Market Fit domain dropped from 0.86 to 0.71. The system identified why: I held two conflicting beliefs with roughly equal confidence.
Belief 1: “Enterprise customers with 10K+ users are our ideal customer profile because self-models deliver the most value at scale.”
Belief 2: “Developer self-serve adoption is the fastest path to product-market fit because it lets us iterate on the core technology with minimal sales overhead.”
These beliefs are not necessarily incompatible. But the way I was acting on them was. I was spending Monday through Wednesday on enterprise sales processes and Thursday through Friday on developer experience improvements. The engineering team was getting conflicting signals about whether to optimize for enterprise features (SSO, audit logs, compliance) or developer features (CLI tools, local development, quick-start templates).
The self-model did not resolve the contradiction for me. It made it visible. And once visible, I could address it deliberately rather than oscillating unconsciously between two strategies.
We chose enterprise-first with a developer-friendly integration layer. Not because the self-model told us to, but because the self-model forced us to make a choice we had been avoiding.
How We Use the Self-Model for Content Strategy
This blog is informed by the founder self-model. Here is how.
Every belief in the model has a coherence score: how consistent it is with other beliefs in the same domain. When coherence is low, it means I have unresolved tensions about that topic. Those tensions make excellent blog posts because writing forces resolution. This post exists because my self-model flagged low coherence around “dogfooding methodology.”
When coherence is high, it means I have a clear, evidence-backed position. Those positions make excellent case studies and thought leadership because the conviction comes through in the writing. Our most shared blog posts have been the ones written from high-coherence beliefs.
The alignment score tells me something else: how well my beliefs match the evidence from customer conversations and market signals. When alignment is low, it means I believe something that the evidence does not support, which is a prompt to update my thinking, not to write a confident blog post about it.
```javascript
// Fetch founder context for content decisions: the self-model drives content
const founderModel = await clarity.getSelfModel(founderId);

// Find low-coherence beliefs: good blog post candidates
const tensions = founderModel.beliefs.filter(
  b => b.coherence < 0.75 && b.observations > 5
);
// These are unresolved; writing forces resolution

// Find high-coherence beliefs: good case study candidates
const convictions = founderModel.beliefs.filter(
  b => b.coherence > 0.90 && b.confidence > 0.85
);
// These are resolved; conviction comes through in writing

// Check alignment: does evidence support the belief?
const misaligned = founderModel.beliefs.filter(
  b => b.alignment < 0.70
);
// These need investigation, not blog posts
```
Three Failure Modes We Discovered
Dogfooding revealed three failure modes that external testing never surfaced. Each one led to an architectural improvement.
Failure Mode 1: Belief Oscillation. Under uncertainty, the self-model would oscillate between contradictory beliefs on consecutive days. Monday’s customer meeting would push a belief toward “enterprise first.” Tuesday’s developer feedback would push it back toward “self-serve first.” The model was faithfully tracking my changing inputs but creating noise rather than signal.
The fix: we added temporal smoothing to belief updates. New observations update beliefs gradually, weighted by observation quality and recency. This prevents one strong signal from overriding weeks of accumulated evidence while still allowing genuine belief changes to propagate.
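A minimal sketch of what temporal smoothing can look like, assuming a simple exponential-moving-average update with a quality-and-recency step size. The formula and constants are illustrative, not Clarity's actual update rule:

```typescript
// Sketch only: gradual belief update weighted by observation quality and recency.
// Constants (0.2 step cap, ~two-week recency decay) are assumed for illustration.
function smoothedUpdate(
  currentConfidence: number, // existing belief confidence, 0..1
  observedSignal: number,    // confidence implied by the new observation, 0..1
  quality: number,           // observation quality weight, 0..1
  ageDays: number            // how old the observation is
): number {
  const recency = Math.exp(-ageDays / 14); // older observations count less
  const alpha = 0.2 * quality * recency;   // small step size caps daily swings
  return currentConfidence + alpha * (observedSignal - currentConfidence);
}
```

With this shape, one strong Monday signal nudges a 0.8-confidence belief to 0.68 rather than flipping it outright, while a stream of consistent observations still moves the belief over time.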
Failure Mode 2: Context Collapse. When two domains overlapped (for example, a customer conversation that touched both Revenue Growth and Product-Market Fit), the system would sometimes attribute observations to the wrong domain. A revenue-focused insight would update a product-market fit belief, creating false coherence or false contradiction.
The fix: we added multi-domain observation routing with explicit context tagging. Each observation can contribute to multiple domains, but the contribution is weighted by relevance rather than applied uniformly.
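One way to picture relevance-weighted routing: a single observation carries a relevance score per domain, and its contribution is normalized across domains instead of applied uniformly. Names and normalization are assumptions for illustration:

```typescript
// Sketch: split one observation's contribution across domains by relevance.
interface RoutedContribution {
  domain: string;
  weight: number; // fraction of the observation applied to this domain
}

function routeObservation(
  relevance: Record<string, number> // domain name -> raw relevance score
): RoutedContribution[] {
  const total = Object.values(relevance).reduce((a, b) => a + b, 0);
  if (total === 0) return []; // nothing relevant, nothing routed
  return Object.entries(relevance).map(([domain, r]) => ({
    domain,
    weight: r / total, // normalized so contributions sum to 1
  }));
}
```

A conversation tagged 3:1 revenue-to-product would then update Revenue Growth beliefs with weight 0.75 and Product-Market Fit beliefs with weight 0.25, rather than fully updating both.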
Failure Mode 3: Confidence Inflation. Repetitive observations (multiple meetings with the same customer reinforcing the same point) inflated confidence scores beyond what the evidence warranted. Hearing the same thing ten times from one customer does not make it ten times more likely to be true.
The fix: we added source diversity weighting. Confidence increases faster from diverse sources (different customers, different contexts) than from repeated observations from the same source.
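A sketch of source-diversity weighting under a simple harmonic-discount assumption: each repeat observation from the same source adds less than the one before, while a fresh source counts in full. The formula is illustrative, not Clarity's actual one:

```typescript
// Sketch: confidence gain with diminishing returns per repeated source.
// The nth observation from one source contributes 1/n (assumed discount).
function confidenceGain(sources: string[]): number {
  const counts = new Map<string, number>();
  let gain = 0;
  for (const s of sources) {
    const n = counts.get(s) ?? 0;
    gain += 1 / (1 + n); // 1st obs from a source counts 1, 2nd 1/2, 3rd 1/3...
    counts.set(s, n + 1);
  }
  return gain;
}
```

Under this scheme, hearing the same point from three different customers contributes a gain of 3, while hearing it three times from one customer contributes only about 1.83.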
| Failure Mode | Symptom | Root Cause | Fix |
|---|---|---|---|
| Belief oscillation | Beliefs flip daily | No temporal smoothing | Weighted gradual updates |
| Context collapse | Wrong-domain attribution | Overlapping observation contexts | Multi-domain routing with relevance weights |
| Confidence inflation | Artificially high confidence | Repeated same-source observations | Source diversity weighting |
What Broke Our Assumptions
The biggest surprise was not what the self-model got wrong. It was what it got right in ways I did not expect.
The system noticed that my belief confidence about content strategy increased after every blog post I wrote, not because the post generated external validation but because the act of writing crystallized my thinking. Writing was not just communication; it was a belief-formation mechanism.
It also noticed that my beliefs about customer needs were more accurate when informed by direct customer conversations than when informed by team reports of customer conversations. Secondhand information was losing the nuance that made the beliefs actionable. This changed how I structured my week. I now join two customer calls weekly rather than relying solely on summaries.
These are not insights the system generated through intelligence. They are patterns it detected through persistent, structured observation of my behavior and beliefs. The self-model is not smarter than me. It is more consistent than me at noticing my own patterns.
Trade-offs
Dogfooding self-models is uncomfortable, and not every team should do it the way we did.
Self-model transparency can be destabilizing. Seeing your own contradictions laid out structurally is not always productive. Some strategic tensions are productive: they keep you exploring rather than prematurely converging. Resolving every contradiction the system surfaces is not always the right move.
Founder self-models create single points of dependency. If the team starts relying on the founder’s self-model for strategic decisions, you have concentrated strategic intelligence in a system that depends on one person’s input quality. This does not scale and can create an unhealthy dynamic.
Dogfooding bias is real. When the founder is the primary user, the product naturally optimizes for the founder’s use case. We had to actively resist the urge to build features specifically for our own dogfooding workflow rather than for our actual customers.
The time investment is nontrivial. Maintaining a useful founder self-model requires consistent input: logging customer conversations, reviewing belief updates, and correcting misattributions. That is 20 to 30 minutes per day that could be spent on other things.
What to Do Next
1. Start with a belief audit. Before building a self-model, write down your top 10 strategic beliefs about your product, customers, and market. Rate your confidence in each. Then ask your co-founder or team lead to do the same. Compare. The gaps between your lists will tell you more about your strategic alignment than any planning exercise.
2. Pick one domain to model first. Do not try to model everything. Choose the domain where you make the most frequent decisions under uncertainty; for most founders, this is Product-Market Fit or Content Strategy. Clarity can bootstrap a founder self-model from your existing documents and conversations. Build the self-model for that domain and live with it for two weeks before expanding.
3. Use contradictions as content. When your self-model surfaces low coherence, write about it. The act of writing resolves tensions, and the resulting content has an authenticity that planned content lacks. This post exists because my self-model flagged a tension about dogfooding. The tension was real, and resolving it produced something worth sharing.
We built a self-model API. Then we used it on ourselves. The contradictions are where the insights live.