
The Alignment Score, Explained: Why It Matters More Than Engagement

Engagement metrics tell you what users did. Alignment scores tell you whether your product understands them. Here's how Clarity computes alignment, and why it's the metric that actually predicts retention.

Robert Ta, CEO & Co-Founder · 6 min read

TL;DR

  • Alignment scores are a composite of belief coherence, directional convergence, and context depth that measure how well your product understands each user.
  • Alignment detects churn 2-4 weeks before engagement metrics show decline, making it a leading indicator where engagement is a lagging one.
  • Users above 0.84 alignment retain at 3.2x the rate of users below 0.60, providing actionable thresholds for personalization confidence.

Alignment scores measure how well a product understands each individual user, making them a leading indicator of retention that engagement metrics cannot provide. Engagement metrics like DAU and session time are lagging indicators that measure habit, not value, which is why Peloton’s record engagement in 2021 failed to predict the subscriber exodus that followed. This post covers the three components of alignment (belief coherence, directional convergence, and context depth), how the score is computed, and why it predicts churn 2-4 weeks before engagement metrics show decline.


The Problem with Engagement Metrics

Engagement metrics measure what happened. Clicks, sessions, time-on-page, feature adoption. They’re the rearview mirror of product analytics.

But they can't tell you why. A user who spends 45 minutes in your product might be deeply engaged, or deeply confused. A user who visits your pricing page three times might be about to convert, or about to rage-quit because nothing makes sense.

Engagement is a lagging indicator [1]. By the time engagement drops, the damage is done. The user stopped feeling understood weeks ago. The metrics just took a while to catch up.

Engagement Metrics

  • Measures what users did (clicks, sessions, time)
  • Lagging indicator: drops after the damage is done
  • Can't distinguish engaged from trapped
  • Treats all interactions as equal signal

Alignment Score

  • Measures how well you understand each user
  • Leading indicator: detects decay before churn
  • Distinguishes value from habit
  • Weighs interaction quality, not just quantity

What Is an Alignment Score?

An alignment score is a composite metric that measures the quality of understanding between your product and an individual user. Not how much they use your product, but how well your product knows them.

It answers a simple question: if this user interacts with your product right now, will the experience feel personalized, relevant, and valuable? Or will it feel generic, irrelevant, and slightly off?

Alignment has three components:

1. Belief Coherence (How consistent is the self-model?)

Belief coherence measures whether the beliefs in a user’s self-model are internally consistent. If your model says a user “values API-first architecture” (confidence: 0.9) but also “prefers no-code tools” (confidence: 0.85), something’s wrong. Either a belief is stale, the confidence is miscalibrated, or the user is genuinely in transition.

High coherence = the model makes sense. Low coherence = the model is confused, and your product's responses will feel confused too. (The underlying idea traces back to Bayesian epistemology [2]: beliefs should form a coherent probability distribution, and incoherence signals that your model of the world needs updating.)
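To make that concrete, here's a minimal sketch of what a coherence check could look like. The Belief structure, the contradiction pairs, and the penalty formula are all illustrative assumptions, not Clarity's internals; the point is simply that confidently held, conflicting beliefs drag the score down.

coherence_sketch.py
from dataclasses import dataclass

@dataclass
class Belief:
    statement: str
    confidence: float  # 0.0 - 1.0

def belief_coherence(beliefs: list[Belief],
                     contradictions: list[tuple[int, int]]) -> float:
    """Illustrative coherence score: start at 1.0 and subtract a penalty for
    each flagged contradiction, scaled by how confidently the model holds
    both conflicting beliefs."""
    if not beliefs:
        return 0.0
    score = 1.0
    for i, j in contradictions:
        # Two confident but conflicting beliefs hurt more than two tentative ones.
        penalty = beliefs[i].confidence * beliefs[j].confidence
        score -= penalty / len(beliefs)
    return max(score, 0.0)

beliefs = [
    Belief("Values API-first architecture", 0.90),
    Belief("Prefers no-code tools", 0.85),
    Belief("Works at an early-stage startup", 0.70),
]
# The first two beliefs conflict, so coherence drops below 1.0.
print(belief_coherence(beliefs, contradictions=[(0, 1)]))  # ~0.745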

2. Directional Convergence (Is understanding improving?)

Convergence measures whether your model is getting better at understanding this user over time. Are belief confidences stabilizing? Are predictions becoming more accurate? Is the self-model converging toward a stable representation?

A converging model means each interaction teaches you something useful. A diverging model means you're getting worse at understanding this user, and they can feel it. This mirrors Karl Popper's falsifiability principle [3]: a model that can't be refined by new evidence isn't a model, it's a guess.
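One way to approximate this in code, purely as an illustration, is to compare how much belief confidences moved earlier in a window versus recently. Shrinking updates mean the model is settling; growing updates mean it's churning. The windowing and scoring below are assumptions, not Clarity's actual convergence math.

convergence_sketch.py
def directional_convergence(confidence_history: list[list[float]]) -> float:
    """Illustrative convergence score. `confidence_history` holds one list of
    per-interaction confidences per belief. Scores above 0.5 mean updates are
    shrinking (settling); scores below 0.5 mean they are growing (churning)."""
    def mean_step(values: list[float]) -> float:
        steps = [abs(b - a) for a, b in zip(values, values[1:])]
        return sum(steps) / len(steps) if steps else 0.0

    scores = []
    for history in confidence_history:
        if len(history) < 4:
            continue  # not enough signal to judge a trend
        mid = len(history) // 2
        early, late = mean_step(history[:mid + 1]), mean_step(history[mid:])
        if early == 0 and late == 0:
            scores.append(1.0)  # perfectly stable
        else:
            scores.append(early / (early + late))
    return sum(scores) / len(scores) if scores else 0.5

# A belief whose confidence is settling around 0.9 scores close to 1.0.
print(directional_convergence([[0.5, 0.7, 0.85, 0.9, 0.91, 0.9]]))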

3. Context Depth (How much do you actually know?)

Depth measures the richness of the self-model. How many beliefs have you captured? How many interactions have refined them? How many contexts (product usage, support conversations, onboarding responses) contribute to the model?

A deep model with high coherence and positive convergence is a user you truly understand. A shallow model, even with high engagement, is a user you're guessing about.
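A rough sketch of depth, using saturating curves so the tenth belief counts for less than the first. The scales and the decision to multiply the three factors are illustrative choices, not the production formula.

context_depth_sketch.py
import math

def context_depth(num_beliefs: int, num_observations: int, num_contexts: int) -> float:
    """Illustrative depth score: each dimension saturates, and depth only
    approaches 1.0 when beliefs, observations, and distinct contexts are all rich."""
    def saturate(count: int, scale: float) -> float:
        # 1 - e^(-count/scale) rises quickly, then flattens out.
        return 1.0 - math.exp(-count / scale)

    breadth = saturate(num_beliefs, scale=10)        # how many beliefs exist
    evidence = saturate(num_observations, scale=30)  # how often they've been refined
    variety = saturate(num_contexts, scale=3)        # product usage, support, onboarding...
    return breadth * evidence * variety

# A handful of beliefs refined in a single context stays shallow.
print(round(context_depth(num_beliefs=6, num_observations=15, num_contexts=1), 2))
# Many beliefs, many observations, several contexts approaches 1.0.
print(round(context_depth(num_beliefs=25, num_observations=120, num_contexts=4), 2))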

0.84
alignment threshold for healthy retention

Users above 0.84 alignment retain at 3.2x the rate of users below 0.60. The threshold isn't arbitrary; it's where understanding becomes felt.

How Clarity Computes It

Here’s what it actually looks like under the hood. When you call the Clarity API, alignment is computed in real-time from the user’s self-model:

alignment-response.json
GET /v1/self-models/{id}/alignment

{
  "alignment": {
    "overall": 0.847,                       // composite score
    "components": {
      "belief_coherence": 0.91,             // internal consistency
      "directional_convergence": 0.82,      // improving over time?
      "context_depth": 0.73                 // richness of understanding
    },
    "trend": "converging",                  // getting better
    "interactions_since_last": 4,
    "confidence_interval": [0.81, 0.88]
  },
  "beliefs": [
    {
      "statement": "Values API-first architecture",
      "confidence": 0.92,
      "observations": 12,
      "last_updated": "2026-03-09T14:30:00Z"
    }
  ],
  "recommendation": "Serve technical deep-dives with code examples"   // actionable
}
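If you want to poke at this from code, a request might look like the sketch below. The base URL, API key, and self-model ID are placeholders; only the /v1/self-models/{id}/alignment path comes from the example above.

fetch_alignment.py
import requests

# Placeholder base URL and token -- substitute your actual Clarity credentials.
BASE_URL = "https://api.clarity.example"
API_KEY = "sk_live_..."

def get_alignment(self_model_id: str) -> dict:
    """Fetch the alignment block for one self-model and return it as a dict."""
    resp = requests.get(
        f"{BASE_URL}/v1/self-models/{self_model_id}/alignment",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["alignment"]

alignment = get_alignment("sm_123")  # hypothetical self-model ID
if alignment["overall"] >= 0.85 and alignment["trend"] == "converging":
    print("Safe to personalize aggressively:", alignment["components"])
else:
    print("Hedge: blend personalization with clarifying questions.")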

The formula isn’t a black box. It’s a weighted composite of three components, where the specific weighting depends on the domain:

Alignment = (Coherence × w₁) + (Convergence × w₂) + (Depth × w₃)

The exact weights are domain-dependent: a healthcare application might weight coherence highest (contradictory beliefs about a patient are dangerous), while a content platform might weight convergence highest (improving over time matters more than perfection today). The values shown in the API response above are illustrative defaults.

What's universal is the ordering principle: coherence generally matters most, because a confused model produces confused experiences, and confused experiences erode trust fastest. Convergence matters next, because a model that's improving is recovering even if it's currently imperfect. Depth matters last, because a thin model with high coherence is more useful than a deep model full of contradictions.
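Written as code, the composite is a few lines. The default weights below are illustrative, not Clarity's production values; the only hard requirement is that they sum to 1.0 so the score stays in [0, 1].

alignment_composite.py
def alignment_score(coherence: float, convergence: float, depth: float,
                    weights: tuple[float, float, float] = (0.45, 0.35, 0.20)) -> float:
    """Weighted composite of the three components. The weights are
    illustrative and domain-dependent; they must sum to 1.0."""
    w1, w2, w3 = weights
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9, "weights should sum to 1.0"
    return coherence * w1 + convergence * w2 + depth * w3

# With the component values from the API response above:
print(alignment_score(0.91, 0.82, 0.73))
# A healthcare app might weight coherence even more heavily:
print(alignment_score(0.91, 0.82, 0.73, weights=(0.6, 0.25, 0.15)))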

What This Changes

When you have alignment scores, three things shift:

1. You catch churn before it happens. A user’s engagement might look fine while their alignment score is dropping. The model is getting less coherent, convergence has stalled, and the product is subtly serving worse experiences. Alignment catches this 2-4 weeks before engagement metrics show the decline.

2. You know what to fix. Low coherence? You have conflicting beliefs; go clarify them. Low convergence? Your interactions aren't teaching you anything; ask better questions. Low depth? You don't know enough; create opportunities for signal collection.

3. You can personalize with confidence. An alignment score of 0.85+ means your model is reliable enough to make high-stakes personalization decisions. Below 0.60, you should hedge: serve more exploratory content, ask more clarifying questions, and resist the urge to over-personalize on a model you don't trust yet.

Alignment Range | What It Means         | Recommended Action
0.85 - 1.00     | Deep understanding    | Full personalization, predict needs
0.70 - 0.84     | Good understanding    | Personalize with validation
0.50 - 0.69     | Partial understanding | Blend personalization with exploration
Below 0.50      | Limited understanding | Prioritize signal collection
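In practice, that table collapses to a simple dispatch on the overall score. The tier boundaries mirror the table above; the action names are illustrative.

personalization_policy.py
def personalization_policy(overall: float) -> str:
    """Map an overall alignment score to the recommended action tier."""
    if overall >= 0.85:
        return "full_personalization"        # deep understanding: predict needs
    if overall >= 0.70:
        return "personalize_with_validation"
    if overall >= 0.50:
        return "blend_with_exploration"
    return "prioritize_signal_collection"    # limited understanding: collect signal

for score in (0.91, 0.78, 0.55, 0.42):
    print(score, "->", personalization_policy(score))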

The Metric That Matters

Look, I get it. Engagement metrics are comfortable. They're easy to track, easy to report, easy to benchmark. Every analytics tool gives them to you for free.

But easy isn't the same as useful. Eric Ries calls these vanity metrics [4]: numbers that make you feel good but don't tell you whether you're creating real value. And a metric that tells you someone is "engaged" while they're quietly losing faith in your product isn't just useless; it's actively misleading.

Alignment scores aren’t a replacement for engagement. They’re the layer of meaning that engagement has always been missing. The leading indicator that tells you whether you’re building understanding or burning it.

Engagement tells you what users did. Alignment tells you whether you understood why.


Stop measuring clicks. Start measuring understanding. Explore alignment scores with Clarity.

References

  1. lagging indicator
  2. Bayesian epistemology
  3. Karl Popper’s falsifiability principle
  4. vanity metrics
  5. not a reliable predictor of customer retention
  6. sampling bias, non-response bias, cultural bias, and questionnaire bias
  7. NPS does not correlate with renewal or churn
  8. Nielsen Norman Group has noted
  9. Research confirms
