
Stated vs Revealed Preferences in AI

Users say they want one thing and do another. AI products that trust stated preferences build for a user who does not exist. Self-models that reconcile stated and revealed preferences build for the real user.

Robert Ta · CEO & Co-Founder · 7 min read

TL;DR

  • Users systematically misreport their own preferences: 68% of users who stated a preference for concise output actually chose detailed responses when given the option
  • This stated vs revealed preference gap, well-documented in behavioral economics, is the primary reason AI personalization feels generic even when users have configured their preferences
  • Self-models that treat stated preferences as priors and update based on revealed behavior produce dramatically better personalization than either signal alone

Stated vs revealed preferences diverge systematically in AI products, with 68% of users who say they want concise output actually engaging more deeply with detailed responses. This gap between what users report and what they do is the primary reason personalization feels generic even when users have configured their settings. This post covers why users misreport preferences, how self-models reconcile stated priors with revealed behavior through Bayesian updating, and the enterprise dimension where admin-configured preferences override individual needs.


Why Users Misreport Preferences

Users are not lying. They are doing something more subtle and more universal: they are reporting the person they want to be, not the person they are.

This is not a flaw in users. It is a well-documented feature of human cognition.

Aspiration bias. When you ask someone about their preferences, they answer from their aspirational self. I want to be the person who reads concise summaries and moves fast. In practice, I am the person who needs the full explanation because the concise version leaves me with questions.

Context collapse. Preferences are context-dependent, but preference surveys are context-free. I prefer concise responses when I already understand the topic. I prefer detailed responses when I am learning something new. Asking me for a single preference across all contexts forces me to average across contexts, producing a preference that is wrong in every specific context.

Social desirability. In enterprise settings, users report preferences they think are professionally appropriate. I should prefer data-driven, formal responses. I actually prefer conversational, example-rich explanations. The gap between should and actually is a social desirability effect that distorts every enterprise preference survey.

Preference instability. Human preferences change. The response style I preferred last month may not be what I prefer today. Stated preferences are snapshots of a moving target, and they decay in accuracy from the moment they are captured.
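
These failure modes have direct architectural consequences. As a minimal sketch, assuming a hypothetical StatedPreference shape and half-life (not Clarity's actual schema), context collapse argues for scoping every preference to a context, and preference instability argues for decaying confidence in a stated preference as it ages:

stated-preference.ts (illustrative)
// Hypothetical shape: a stated preference scoped to a context, with
// confidence that decays from the moment it is captured. Field names
// and the half-life are illustrative assumptions.
interface StatedPreference {
  statement: string;   // e.g. 'prefers concise responses'
  context: string;     // preferences only hold per-context
  confidence: number;  // 0..1; a stated preference never reaches 1.0
  capturedAt: Date;    // stated preferences are snapshots
}

function decayedConfidence(
  pref: StatedPreference,
  now: Date,
  halfLifeDays = 90 // assumed decay rate, not a measured value
): number {
  const ageDays = (now.getTime() - pref.capturedAt.getTime()) / 86_400_000;
  return pref.confidence * Math.pow(0.5, ageDays / halfLifeDays);
}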

Behavioral economists have documented these effects for decades. Daniel Kahneman’s work on the experiencing self vs the remembering self, Dan Ariely’s research on predictably irrational decision-making, and Thaler and Sunstein’s work on choice architecture all converge on the same conclusion: self-reported preferences are systematically biased in predictable ways.

Trusting Stated Preferences

  • User sets preference to concise in settings
  • System always delivers concise responses
  • User engagement drops over time
  • Product team blames content quality, not preference accuracy

Reconciling Stated + Revealed

  • User sets preference to concise (noted as prior)
  • System observes user engages deeply with detailed responses
  • Self-model updates: prefers depth when learning, brevity when expert
  • Personalization improves automatically with each interaction

The Architecture of Preference Reconciliation

Building a system that reconciles stated and revealed preferences requires a specific architecture. You cannot bolt this onto a preference store. You need a self-model that treats preferences as beliefs with confidence scores, not as configuration values.

Here is the key insight: a stated preference should be treated as a prior, a starting belief with moderate confidence. Every subsequent interaction either reinforces or updates that belief based on revealed behavior. Over time, the self-model converges on the user’s true preference, which is almost always more nuanced than what they would articulate.

preference-reconciliation.ts
// Step 1: Capture stated preference as a prior (starting hypothesis, not ground truth)
await clarity.addBelief(userId, {
  statement: 'User prefers concise responses',
  confidence: 0.6, // Moderate: it is stated, not observed
  source: 'user_stated',
  context: 'response_style'
});

// Step 2: Observe revealed behavior (evidence from actual choices)
await clarity.addObservation(userId, {
  type: 'response_engagement',
  data: { style: 'detailed', engagement: 'high', context: 'learning' }
});

// Step 3: Self-model reconciles automatically (Bayesian update)
const preference = await clarity.getBelief(userId, 'response_style');
// Returns: {
//   statement: 'User prefers detailed responses when learning,
//               concise when reviewing familiar material',
//   confidence: 0.84,
//   observations: 23,
//   source: 'reconciled'
// }

The self-model does not discard the stated preference. It uses it as a starting point and refines it with evidence. After enough observations, the model develops a nuanced, context-dependent understanding that neither stated nor revealed preferences alone could provide.
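
What might that update look like internally? Clarity's actual algorithm is not shown here, but a minimal Beta-Bernoulli sketch captures the idea: seed a prior from the stated preference, count each observation as evidence for or against it, and read confidence off the posterior mean:

belief-update.ts (illustrative)
// Illustrative Beta-Bernoulli reconciliation, not Clarity's internal
// algorithm. The stated preference seeds pseudo-observations; each
// real observation then counts as evidence for or against it.
interface BeliefState {
  matches: number;    // observations consistent with the stated preference
  mismatches: number; // observations contradicting it
}

// Seed so the stated preference starts at 3 / (3 + 2) = 0.6 confidence.
const prior: BeliefState = { matches: 3, mismatches: 2 };

function update(state: BeliefState, behaviorMatchedStated: boolean): BeliefState {
  return behaviorMatchedStated
    ? { ...state, matches: state.matches + 1 }
    : { ...state, mismatches: state.mismatches + 1 };
}

// Posterior mean: confidence that the stated preference actually holds.
// One noisy observation barely moves it; twenty consistent
// observations dominate the prior.
function confidence(state: BeliefState): number {
  return state.matches / (state.matches + state.mismatches);
}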

The Enterprise Dimension

The stated-revealed gap is amplified in enterprise settings, because you have an additional layer: the preferences stated by administrators on behalf of users.

I worked with an enterprise customer where IT had configured the AI assistant for formal, brief responses. The administrator believed this matched the company culture. When we analyzed actual usage data, users consistently overrode the tone settings, requested elaboration on brief answers, and gave higher satisfaction ratings to casual, detailed responses.

The institutional stated preference was not just wrong; it was systematically wrong in a way that reduced adoption. Users experienced the AI as cold and unhelpful, not because the AI was bad, but because someone else’s stated preferences were overriding their revealed preferences.

| Preference Source | Accuracy at Day 1 | Accuracy at Day 30 | Update Mechanism |
|---|---|---|---|
| Admin-configured (institutional stated) | Low: reflects policy, not users | Degrades: policy drifts from reality | Manual reconfiguration |
| User-stated (survey/settings) | Moderate: aspirational bias | Decays: preferences change | User must re-enter |
| Behavioral (revealed only) | Low: insufficient data | Moderate: needs volume to converge | Automatic but slow |
| Self-model (stated prior + revealed updates) | Moderate: good starting point | High: converges on true preference | Continuous, automatic |

The self-model approach wins at Day 30 because it combines the immediate usefulness of stated preferences with the accuracy of revealed behavior. It starts somewhere reasonable and gets better automatically.


The Divergence Detection Pattern

The most valuable signal in the stated-revealed gap is not the gap itself; it is the pattern of divergence. When you can detect systematic preference divergence, you unlock a new type of product intelligence.

For example, if a user consistently states preferences for simplicity but reveals preferences for depth, you might be dealing with a learner: someone in a growth phase who needs more support than they realize. If a user states preferences for formal output but reveals preferences for casual language, you might be dealing with someone navigating organizational culture expectations.

These divergence patterns become inputs to the self-model. They do not just tell you what the user prefers. They tell you something about why the user’s stated and actual preferences differ, which is often more useful for personalization than the preference itself.
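
As a sketch of what detection could look like (the signal shape, thresholds, and pattern labels are assumptions, not tuned values):

divergence-detection.ts (illustrative)
// Sketch of a divergence detector: flag a systematic gap between a
// stated style preference and the behavioral estimate.
interface PreferenceSignal {
  stated: 'concise' | 'detailed';
  revealedDetailedRate: number; // fraction of interactions choosing depth
  observations: number;
}

type Divergence = { diverged: false } | { diverged: true; pattern: string };

function detectDivergence(sig: PreferenceSignal): Divergence {
  if (sig.observations < 10) return { diverged: false }; // need volume first
  if (sig.stated === 'concise' && sig.revealedDetailedRate > 0.7) {
    // States simplicity, chooses depth: often a learner in a growth phase.
    return { diverged: true, pattern: 'likely-learner' };
  }
  if (sig.stated === 'detailed' && sig.revealedDetailedRate < 0.3) {
    return { diverged: true, pattern: 'states-depth-reveals-brevity' };
  }
  return { diverged: false };
}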

Trade-offs

Reconciling stated and revealed preferences introduces real complexity.

Privacy implications. Tracking revealed behavior is more invasive than collecting stated preferences. Users who configure settings expect the system to respect those settings. A system that observes their behavior and overrides their stated preferences can feel manipulative if not done transparently. Transparency is non-negotiable: users must be able to see when and why the system diverged from their stated preference.

Cold start problem. Revealed preferences require interaction data. For new users, you only have stated preferences. The system must work well with stated preferences alone and gracefully transition to reconciled preferences as data accumulates. Do not degrade the new-user experience while waiting for behavioral data.
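
One way to make that transition graceful, sketched here with an assumed pivot constant, is to weight the stated prior against the revealed estimate by observation volume:

cold-start-blend.ts (illustrative)
// Sketch of a cold-start blend: the stated preference dominates when
// behavioral data is thin; revealed behavior takes over as data
// accumulates. The pivot constant is an assumption.
function blendedPreference(
  statedScore: number,   // 0..1 from settings (e.g. 1 = fully concise)
  revealedScore: number, // 0..1 estimated from behavior
  observations: number,
  pivot = 10             // observation count at which the signals weigh equally
): number {
  const behavioralWeight = observations / (observations + pivot);
  return (1 - behavioralWeight) * statedScore + behavioralWeight * revealedScore;
}
// Day 1 (0 observations): returns the stated score unchanged.
// After 30 observations: behavior carries 75% of the weight.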

Preference legitimacy. Sometimes a stated preference is genuinely correct and the revealed behavior is noisy. A user who states they prefer concise responses might engage with a detailed response once because it was unusually good, not because they changed their mind. The reconciliation algorithm needs to distinguish signal from noise.
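
A simple guard, again with assumed window and threshold values, is to require contradictions to be consistent over a recent window before treating them as a genuine shift:

noise-filter.ts (illustrative)
// Sketch of a noise filter: a single contradicting interaction should
// not flip a belief. Window size and threshold are assumptions.
function isGenuineShift(
  recentContradictsStated: boolean[], // last N observations vs stated pref
  minObservations = 5,
  threshold = 0.7
): boolean {
  if (recentContradictsStated.length < minObservations) return false;
  const rate =
    recentContradictsStated.filter(Boolean).length / recentContradictsStated.length;
  return rate >= threshold; // one unusually good detailed answer won't flip it
}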

Organizational politics. In enterprise settings, overriding admin-configured preferences based on individual behavior can create political issues. IT configured the system a certain way for a reason, even if that reason is wrong. Handle this with transparency and escalation, not silent override.

What to Do Next

  1. Measure the gap in your own product. If you have both stated preferences and behavioral data, compare them. Calculate what percentage of users who stated a specific preference actually behave consistently with it (a sketch of this measurement follows this list). The number will be lower than you expect, and the insight will reshape how you think about personalization.

  2. Treat preferences as beliefs, not configuration. Architecturally, move from a preference store (key-value pairs) to a belief model (statements with confidence scores and observation counts). This single architectural shift enables preference reconciliation.

  3. Build the reconciliation loop. Start with one preference dimension (response style, content depth, or communication formality) and implement the stated-prior plus revealed-update pattern. Clarity provides this architecture out of the box, including divergence detection and transparent user-facing preference models.
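
For step 1, here is a minimal sketch of the measurement, assuming hypothetical records that join settings data with behavior logs:

gap-measurement.ts (illustrative)
// Sketch for measuring the stated-revealed gap. The record shape and
// the 0.5 contradiction threshold are illustrative assumptions.
interface UserRecord {
  statedConcise: boolean;         // from settings
  detailedEngagementRate: number; // from behavior logs, 0..1
}

function statedRevealedGap(users: UserRecord[]): number {
  const statedConcise = users.filter(u => u.statedConcise);
  if (statedConcise.length === 0) return 0;
  // A user contradicts their stated preference if they mostly engage
  // with detailed responses anyway.
  const contradicting = statedConcise.filter(u => u.detailedEngagementRate > 0.5);
  return contradicting.length / statedConcise.length;
}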


Your users are telling you who they want to be, not who they are. Build for the real user.

