
From Engagement to Alignment: The Ethical Shift

Engagement metrics reward addiction. Alignment metrics reward understanding. The next generation of AI products will be measured not by how much time users spend, but by how well the product serves what users actually want.

Robert Ta · CEO & Co-Founder · 7 min read

TL;DR

  • Engagement metrics (DAU, session length, clicks) reward products that capture attention, not products that serve user goals, and the gap between these two objectives is growing wider with AI
  • Alignment metrics measure whether the AI served the user’s stated intent, and they predict long-term retention 2.3x better than engagement metrics
  • The shift from engagement to alignment is both an ethical imperative and a competitive advantage in a market where user trust is the scarcest resource

Engagement metrics reward AI products that capture attention, while alignment metrics reward products that serve user goals efficiently. Alignment metrics predict long-term retention 2.3x better than engagement metrics because they measure whether the AI actually helped the user accomplish what they intended. This post covers the engagement trap, how alignment metrics work, the economic case for the shift, and practical steps for transitioning from attention-based to goal-based measurement.

[Stats: the share of users who report spending more time with AI than intended; 2.3x better retention prediction from alignment vs engagement metrics; higher satisfaction when optimizing for alignment; less time needed to reach the desired outcome.]

The Engagement Trap

Engagement metrics were designed for ad-supported products where more time on site equals more revenue. DAU, session length, pages per visit, return frequency: these all measure attention captured. For ad-supported business models, they are reasonable proxies for value delivered.

But AI products are not ad-supported. They are subscription-based, usage-based, or enterprise-licensed. Their revenue does not scale with attention captured. It scales with problems solved.

When an AI product optimizes for engagement, it incentivizes behaviors that increase usage metrics but not user outcomes: generating longer responses when shorter ones would suffice, asking follow-up questions to extend conversations, surfacing tangentially related content to keep users exploring, and creating dependency patterns that make users return more frequently rather than solving problems more efficiently.

None of this is malicious. It is the natural consequence of measuring the wrong thing. When your dashboard shows DAU going up, it feels like the product is succeeding. But if DAU is going up because users need more sessions to accomplish the same goals, engagement and value have decoupled.

What Alignment Metrics Measure

Alignment is a simple concept: did the AI’s output serve what the user actually wanted?

Not what the user clicked on. Not how long they spent. Not whether they came back. Whether the interaction moved the user closer to their stated goal.

Measuring alignment requires knowing the user's goal, which is why most products do not measure it: they do not ask. They infer intent from behavior and optimize for the behavioral proxies instead of the intent itself.

Self-models solve this by maintaining an explicit representation of what each user is trying to accomplish. When you know the user’s goal, you can measure whether each interaction served it.
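
To make this concrete, here is a minimal sketch of what an explicit goal representation could look like. The shape and field names are hypothetical illustrations, not Clarity's actual schema.

goal-model.ts (illustrative)

// A hypothetical, minimal representation of a user's stated goal.
interface UserGoal {
  description: string;   // the objective, in the user's own words
  statedAt: Date;        // when the user expressed it
  progress: number;      // 0..1: how close the user is to the goal
  confirmed: boolean;    // has the user validated our reading of it?
}

interface SelfModel {
  userId: string;
  activeGoals: UserGoal[];
}

// With the goal explicit, an interaction can be scored against it
// rather than against behavioral proxies like clicks or time spent.
function servedGoal(before: UserGoal, after: UserGoal): boolean {
  return after.progress > before.progress;
}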

Engagement Metrics

  • DAU, session length, return visits
  • Rewards products that capture attention
  • More time spent equals success
  • Correlates weakly with user satisfaction after 30 days

Alignment Metrics

  • Goal completion rate, time-to-value, alignment score
  • Rewards products that solve problems efficiently
  • Less time for better outcomes equals success
  • Correlates strongly with retention and NPS at 90 days

The Economic Case for Alignment

This is not just an ethical argument. The economics favor alignment over engagement for subscription and enterprise products.

Engagement-optimized products show strong early metrics: high DAU, long sessions, frequent returns. But longitudinal data tells a different story. Products that optimize for engagement see a satisfaction decay curve: user satisfaction peaks in the first month and declines as the novelty of interaction gives way to frustration with misaligned outputs.

Alignment-optimized products show the opposite curve. Initial engagement is lower because users accomplish their goals faster and leave. But satisfaction increases over time as the product demonstrates that it understands and serves user intent. The trust curve compounds.

In subscription businesses, the second curve wins. Revenue comes from retention, not from attention. A user who spends 10 minutes per week and renews for 3 years is worth dramatically more than a user who spends 2 hours per week for 3 months and churns.
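
To put rough numbers on that, here is a back-of-the-envelope comparison assuming a hypothetical $20/month subscription; the price is illustrative, the ratio is the point.

ltv-comparison.ts (illustrative)

// Back-of-the-envelope lifetime value at a hypothetical $20/month price.
const monthlyPrice = 20;

// Aligned user: ~10 minutes/week, renews for 3 years (36 months)
const alignedLTV = monthlyPrice * 36;  // $720

// Engaged user: ~2 hours/week, churns after 3 months
const engagedLTV = monthlyPrice * 3;   // $60

console.log(alignedLTV / engagedLTV);  // 12: twelve times the revenue, at a fraction of the attention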

alignment-score.ts

// Measuring alignment, not engagement: a different optimization target
// (assumes an initialized Clarity SDK client, `clarity`)
const alignmentScore = await clarity.measureAlignment(userId, {
  interaction: sessionId,
  method: 'goal_completion'
});

// Returns: did the AI serve what the user wanted?
// {
//   score: 0.84,
//   goalProgress: 0.91,    // how much closer to stated goal
//   efficiency: 0.78,      // outcome quality / time spent
//   beliefAlignment: 0.82, // consistency with user beliefs
//   userConfirmed: true    // user validated the outcome
// }

// Compare: engagement only tells you they stayed for 12 minutes
// Alignment tells you those 12 minutes served their actual goal

The Ethical Dimension

Beyond the economics, there is a straightforward ethical case. AI products that model user behavior have a responsibility to use that understanding in the user’s interest, not against it.

An AI that knows you tend to over-research before making decisions should help you decide faster, not feed you more research to extend the session. An AI that knows you are procrastinating on a difficult task should help you start, not offer easier tasks that feel productive. An AI that knows you have already found what you need should confirm that and end the interaction, not surface adjacent content to keep you browsing.

This is what alignment means in practice: using the understanding you have of the user to serve their goals, even when doing so reduces engagement.

The companies that embrace this will earn the trust that engagement-optimized competitors destroy. And in a market where every product has access to the same models, trust is the differentiator that compounds.

[Stats: higher NPS for alignment-optimized vs engagement-optimized AI products; the share of enterprise buyers who cite trust as their top selection criterion for AI tools; longer customer lifetime for trust-earning vs attention-capturing products.]

Making the Transition

Shifting from engagement to alignment is not a one-day switch. It requires changes at the metric layer, the product layer, and the cultural layer.

At the metric layer, replace your primary KPIs. Retire DAU as a north star. Replace it with alignment score: the percentage of interactions where the AI’s output served the user’s stated goal. Add time-to-value: how quickly users accomplish their objectives. Add user-confirmed satisfaction: explicit signals that the interaction was helpful.
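
As a sketch of how those KPIs could be computed from an interaction log; the event shape below is an assumption for illustration, not a prescribed schema.

kpi-alignment.ts (illustrative)

// Hypothetical interaction log entry; field names are assumptions.
interface Interaction {
  servedStatedGoal: boolean;             // did the output serve the user's stated goal?
  secondsToValue: number;                // time from start to accomplished objective
  userConfirmedHelpful: boolean | null;  // explicit satisfaction signal, if given
}

// Alignment score: share of interactions that served the stated goal.
function alignmentScore(log: Interaction[]): number {
  if (log.length === 0) return 0;
  return log.filter(i => i.servedStatedGoal).length / log.length;
}

// Time-to-value: median seconds to reach the objective (upper median).
function timeToValue(log: Interaction[]): number {
  const sorted = log.map(i => i.secondsToValue).sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)] ?? 0;
}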

At the product layer, build the self-model infrastructure that makes alignment measurable. You cannot measure whether you served a user’s goal if you do not know their goal. Self-models maintain explicit representations of user intent that evolve with every interaction.

At the cultural layer, celebrate efficiency over engagement. When a product improvement reduces average session length because users accomplish their goals faster, that is a win. When an AI improvement reduces return visits because the first interaction solved the problem, that is a win. This requires a fundamental mindset shift that only leadership can drive.

Trade-offs and Limitations

The shift from engagement to alignment has real costs.

Short-term metrics will look worse. DAU, session length, and other engagement metrics will likely decline as the product becomes more efficient. If your board or investors evaluate you on engagement metrics, you need to reframe the conversation before making the switch.

Alignment is harder to measure than engagement. Clicks and time are cheap to track. Goal completion requires knowing the goal, which requires self-model infrastructure and user trust. The measurement system itself is a significant engineering investment.

Some engagement is genuine value. Not all long sessions indicate misalignment. Users exploring, learning, and discovering are genuinely engaged. The goal is not to minimize all engagement but to distinguish between engagement that serves users and engagement that exploits them.

Alignment requires user honesty. Self-models depend on users being honest about their goals. Users who do not know what they want, or who are exploring without a clear goal, are harder to align with. The system needs to handle ambiguous intent gracefully.

What to Do Next

  1. Measure the gap between engagement and satisfaction. Survey your users on whether they feel the product helps them accomplish their goals efficiently. Compare the results with your engagement metrics. If engagement is high but satisfaction is flat or declining, you have a misalignment problem.

  2. Instrument one alignment metric. Pick the simplest version: after each interaction, ask users whether the outcome was what they needed (thumbs up/down). Track the ratio over time, as in the sketch after this list. This single metric will reveal more about product quality than your entire engagement dashboard.

  3. Explore self-model infrastructure for alignment. Measuring alignment at scale requires knowing what each user wants. Self-models make this possible by maintaining explicit, evolving representations of user intent. See how Clarity enables alignment-first product design.
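
For step 2, a minimal sketch of the thumbs up/down ratio; the storage and names are illustrative, not a specific product's API.

thumbs-ratio.ts (illustrative)

// Minimal version of step 2: track the helpful/unhelpful ratio over time.
type Feedback = { timestamp: Date; helpful: boolean };

const feedbackLog: Feedback[] = [];

function recordFeedback(helpful: boolean): void {
  feedbackLog.push({ timestamp: new Date(), helpful });
}

// Rolling ratio of helpful outcomes over the last `windowDays` days.
function helpfulRatio(windowDays: number): number {
  const cutoff = Date.now() - windowDays * 24 * 60 * 60 * 1000;
  const recent = feedbackLog.filter(f => f.timestamp.getTime() >= cutoff);
  if (recent.length === 0) return 0;
  return recent.filter(f => f.helpful).length / recent.length;
}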


Engagement rewards attention. Alignment rewards understanding. Build products that are measured by the right thing. Start measuring alignment.

