The Alignment Check Before Every PR
What if every pull request included an automated alignment check, verifying not just that the code works, but that it moves the product closer to what users actually need?
TL;DR
- Every PR goes through automated checks for code quality, testing, and security, but no automated check verifies whether the code change aligns with user needs and product direction
- An alignment check in the CI pipeline compares the PR’s intent against user self-models, product beliefs, and stakeholder priorities, surfacing misalignment before the code merges
- Teams that implement PR-level alignment checks reduce misaligned feature delivery by 45 percent and cut alignment-related rework by 60 percent
An alignment check in the CI pipeline is an automated verification that code changes serve user needs and product direction, not just technical correctness. Without it, teams routinely ship features that pass every lint, test, and security check but actively misalign with what users actually want. This post covers how to implement PR-level alignment checks, the annotation-versus-gate design decision, and the compounding improvement these checks deliver over time.
What an Alignment Check Looks Like
An alignment check is conceptually simple. It takes the PR’s described intent (what the code change is meant to accomplish) and compares it against three things: the user self-models (what users need), the product belief model (what the product should do), and the stakeholder alignment map (what stakeholders have agreed to prioritize).
The check produces one of three verdicts:

- Aligned: the PR clearly supports known user needs and product direction. No action required beyond standard code review.
- Uncertain: the PR’s alignment is ambiguous; it may or may not serve user needs, and human judgment is needed during review.
- Misaligned: the PR conflicts with known user needs, product beliefs, or stakeholder priorities. Requires explicit discussion.
The check does not block PRs. It annotates them. A misaligned PR does not fail CI. It gets a comment explaining the concern: “This PR introduces an engagement-optimized notification schedule. User self-models indicate 67 percent of target users prefer notification bundling over real-time alerts. Consider aligning with revealed user preferences.”
The engineer and reviewer can then make an informed decision. Maybe the engagement optimization is intentional and well-reasoned. Maybe it is an accidental misalignment that nobody would have caught without the check. Either way, the decision is explicit rather than invisible.
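The verdicts and the annotation they produce can be modeled with a small result type. This is an illustrative sketch, not a real CI API: the `AlignmentResult` shape, the score thresholds, and the `renderAnnotation` helper are all assumptions for the example.

```typescript
// Hypothetical result shape for an alignment check (illustrative only).
type Verdict = 'aligned' | 'uncertain' | 'misaligned';

interface AlignmentResult {
  verdict: Verdict;
  score: number; // 0..1, higher = better aligned
  note?: string; // one-sentence explanation of the concern
}

// Map a numeric alignment score to a verdict using two example thresholds.
function classifyVerdict(score: number): Verdict {
  if (score >= 0.75) return 'aligned';
  if (score >= 0.5) return 'uncertain';
  return 'misaligned';
}

// Render the PR comment. Aligned PRs get no comment at all,
// keeping the annotation quiet unless there is something to say.
function renderAnnotation(result: AlignmentResult): string | null {
  if (result.verdict === 'aligned') return null;
  const label = result.verdict === 'uncertain' ? 'Uncertain' : 'Misaligned';
  return `Alignment: ${label} (score ${result.score.toFixed(2)}). ${result.note ?? ''}`.trim();
}
```

Keeping aligned PRs silent is deliberate: a comment on every PR would train reviewers to skim past the annotation.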
Current CI Pipeline
- Lint check: Does the code follow style guidelines?
- Test check: Does the code work correctly?
- Security check: Does the code introduce vulnerabilities?
- Build check: Does the code compile?
Alignment-Aware CI Pipeline
- All existing checks, plus:
- Alignment check: Does the code serve user needs?
- Belief check: Does the code respect product direction?
- Stakeholder check: Does the code align with agreed priorities?
Implementing the Check
The alignment check runs in three stages, each comparing the PR against a different layer of the alignment model.
Stage 1: User Alignment. The check queries user self-models to determine whether the PR’s target behavior aligns with known user needs. If the PR modifies recommendation logic, the check asks: do user self-models indicate this is what users want? If the PR changes notification behavior, the check asks: does this match users’ stated and revealed communication preferences?
Stage 2: Product Belief Alignment. The check compares the PR against the product’s documented beliefs: architectural decisions, design principles, and product strategy. If the product believes in user autonomy but the PR introduces an auto-pilot feature, the check flags the tension.
Stage 3: Stakeholder Alignment. The check verifies that the PR supports priorities that stakeholders have explicitly agreed to. If the PR introduces a feature that was not part of any agreed roadmap, or contradicts a recent stakeholder decision, the check flags it for discussion.
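The three stages can be combined into a single result. One reasonable policy, sketched here with hypothetical stage objects (nothing below is a real API), is to let the overall score be the weakest stage's score, since a PR that fails any one layer deserves attention.

```typescript
// Hypothetical per-stage result (illustrative only).
interface StageScore {
  stage: 'user' | 'belief' | 'stakeholder';
  score: number; // 0..1, higher = better aligned
  concern?: string;
}

// Combine per-stage scores: the overall score is the minimum,
// so a single weak stage is enough to surface a concern.
function combineStages(stages: StageScore[]): { score: number; concerns: string[] } {
  const score = Math.min(...stages.map((s) => s.score));
  const concerns = stages
    .filter((s) => s.concern !== undefined)
    .map((s) => `[${s.stage}] ${s.concern}`);
  return { score, concerns };
}
```

Taking the minimum rather than an average prevents a strong user-alignment score from masking a clear conflict with a stakeholder decision.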
```typescript
// CI alignment check - runs alongside lint, test, security.
// The missing CI step.
export async function checkAlignment(pr: PullRequest) {
  const userModels = await clarity.getRelevantUserModels(pr.affectedFeatures);
  const productBeliefs = await clarity.getProductBeliefs();
  const stakeholderPriorities = await clarity.getStakeholderPriorities();

  const alignment = await clarity.evaluateAlignment({
    prIntent: pr.description,
    codeChanges: pr.diff,
    userNeeds: userModels.aggregate(),
    productDirection: productBeliefs,
    stakeholderAgreements: stakeholderPriorities,
  });

  // Annotation, not gate. Returns, for example:
  // { verdict: 'uncertain', score: 0.64,
  //   note: 'Notification frequency change affects 67% of users
  //          who prefer bundled delivery. Consider A/B test.' }
  return alignment;
}
```
| CI Check | What It Catches | Impact of Missing It | Adoption (Industry) |
|---|---|---|---|
| Linting | Style violations | Low (cosmetic) | 95%+ |
| Unit tests | Functional bugs | Medium (user-facing bugs) | 90%+ |
| Security scanning | Vulnerability introduction | High (data breaches) | 75%+ |
| Build verification | Compilation failures | Medium (deployment failures) | 95%+ |
| Alignment check | Strategic misalignment | Very High (wasted features, user churn) | Less than 1% |
The Annotation vs Gate Decision
A critical design decision is whether the alignment check should block merges (a gate) or annotate PRs (a comment). I strongly recommend annotation over gating.
Gating creates friction that engineers will resist and work around. If the alignment check blocks a PR, engineers will game the PR descriptions to pass the check rather than actually addressing the alignment concern. This is the same dynamic that makes overly strict linting rules counterproductive.
Annotation creates awareness without friction. A comment that says “this PR may conflict with user preferences in this specific way” adds information to the review process without blocking it. The reviewer can consider the alignment note alongside the code review and make a judgment call.
Over time, as the alignment check proves its value, catching real misalignment that would have shipped without it, teams naturally adopt it more seriously. The annotation earns credibility through demonstrated value, not through enforced compliance.
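In CI terms, "annotation, not gate" means the check always exits successfully and only posts a comment when there is a concern. A minimal sketch, assuming a `postComment` helper that wraps your code host's comment API (hypothetical, injected here so the logic stays testable):

```typescript
type Verdict = 'aligned' | 'uncertain' | 'misaligned';

// Hypothetical annotation step: never fails the build, only comments.
async function annotatePr(
  verdict: Verdict,
  note: string,
  postComment: (body: string) => Promise<void>,
): Promise<number> {
  if (verdict !== 'aligned') {
    await postComment(`Alignment check: ${verdict}. ${note}`);
  }
  return 0; // always exit 0 -- annotation, not gate
}
```

A gating variant would differ by a single line (`return verdict === 'misaligned' ? 1 : 0;`), which is exactly why the design decision deserves explicit discussion rather than being an accident of implementation.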
The Compounding Effect
The alignment check’s value compounds over time because the underlying models improve with every interaction. User self-models become more accurate as they accumulate observations. Product belief models become more detailed as decisions are documented. Stakeholder alignment maps become more precise as priorities are explicitly tracked.
This means the alignment check in month 6 is dramatically better than the alignment check in month 1. Early checks might flag obvious misalignments. Mature checks can identify subtle tensions between a code change and an evolving user need that no human reviewer would catch.
Trade-offs
The alignment check requires model infrastructure. You cannot check alignment against user models if you do not have user models. The prerequisite is some form of user understanding infrastructure, whether self-models, behavioral analytics, or documented user research. Start with whatever you have.
False positives erode trust. If the alignment check flags too many PRs incorrectly, engineers will learn to ignore it. Calibrate the check to surface only high-confidence concerns initially, and expand coverage as accuracy improves. It is better to catch 30 percent of misalignment accurately than 90 percent inaccurately.
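The calibration advice above amounts to a confidence threshold: surface only concerns the model is highly confident about, and lower the threshold as precision is validated. A sketch with a hypothetical `Concern` shape:

```typescript
// Hypothetical concern object produced by the alignment check.
interface Concern {
  message: string;
  confidence: number; // model's confidence this is a real misalignment, 0..1
}

// Surface only high-confidence concerns. Start strict (e.g. 0.9) and
// lower the threshold as the check's accuracy is validated over time.
function surfaceConcerns(concerns: Concern[], threshold = 0.9): Concern[] {
  return concerns.filter((c) => c.confidence >= threshold);
}
```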
Alignment checks can slow down iteration. If every PR generates a long alignment comment, the review process gets heavier. Keep annotations concise: a one-sentence summary with an alignment score and a specific concern. Details should be available on click, not in the main comment.
What to Do Next
1. Audit your last 20 PRs for alignment. Review the last 20 merged PRs and assess, in hindsight, how many aligned with user needs. If more than 25 percent were misaligned or ambiguous, an automated alignment check would be valuable. Document the specific misalignments; they become test cases for your check.
2. Create a minimal alignment annotation. Before building automated checks, try manual alignment annotations. For one sprint, have the PM add a comment to every PR: “Alignment assessment: [aligned/uncertain/misaligned] because [one sentence].” Track how often this annotation surfaces concerns that the code review alone would have missed.
3. Build the automated check. Start with the simplest version: a CI step that compares the PR description against documented product principles and flags contradictions. This requires no user models, just a product belief document and a comparison function. Upgrade to user-model-aware checks as your self-model infrastructure matures.
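The simplest version in step 3 needs no user models at all. A naive sketch: each documented principle lists phrases that tend to contradict it, and the check flags any principle whose phrases appear in the PR description. The principle data here is invented for illustration; a real implementation would likely use an LLM or embedding comparison instead of substring matching.

```typescript
// Hypothetical product-principle entry (illustrative only).
interface Principle {
  name: string;
  contradictoryPhrases: string[]; // phrases that suggest a conflict
}

// Flag principles whose contradictory phrases appear in the PR description.
function flagContradictions(prDescription: string, principles: Principle[]): string[] {
  const text = prDescription.toLowerCase();
  return principles
    .filter((p) => p.contradictoryPhrases.some((phrase) => text.includes(phrase.toLowerCase())))
    .map((p) => p.name);
}
```

Even this crude version gives you the scaffolding that matters: a principles document kept in the repo, a CI step that reads it, and a place to plug in smarter comparison later.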