
Your Copilot Should Know You by Now

GitHub Copilot has seen millions of your keystrokes and still treats you like a stranger. Self-models give developer tools a persistent understanding of how you think and code.

Robert Ta · CEO & Co-Founder · 9 min read

TL;DR

  • AI coding assistants like GitHub Copilot process millions of keystrokes per developer but maintain no persistent model of individual preferences, patterns, or thinking styles
  • The result: suggestions that developers immediately rewrite to match their own conventions, wasting the time the copilot was supposed to save
  • Self-models give copilots a persistent understanding of each developer, turning a stateless autocomplete into a pair programmer that improves with every session

AI coding copilots are everywhere. 84% of developers now use or plan to use AI tools [1] in their workflow, yet these tools maintain no persistent model of individual preferences, patterns, or coding philosophy. Developers spend significant time rewriting suggestions to match their own conventions, negating the productivity gains the tool was supposed to deliver. This post covers what a developer self-model contains, how accept/reject signals become compounding intelligence, and how to move copilots from stateless autocomplete to personalized pair programming.

30%
of Copilot suggestions accepted without edits
29%
of developers actually trust AI tool accuracy
65%
of developers say AI misses context during refactoring

The Stateless Problem

Every time a developer opens a new session with a coding copilot, the context resets to zero. The copilot has access to the current file, maybe the current project, and a massive language model trained on public code. But it has no memory of:

  • Which of its suggestions the developer consistently accepts vs. rejects
  • Preferred naming conventions beyond what is in the current file
  • Architectural opinions (functional vs. OOP, composition vs. inheritance)
  • Testing philosophy (unit-first, integration-first, TDD, or test-after)
  • Error handling patterns (exceptions, result types, assertions)
  • The types of comments the developer writes and which ones get deleted

These are not minor preferences. They represent a developer’s mental model of how code should be structured. And the copilot starts fresh every session, with none of this accumulated understanding.

The result is predictable. According to GitHub’s enterprise study with Accenture [2], developers accept roughly 30% of Copilot suggestions. The other 70% are either rejected entirely or accepted and immediately edited. Much of that editing is not about correctness. It is about alignment with the developer’s personal coding philosophy. And the 2025 Stack Overflow Developer Survey [3] found that only 29% of developers trust AI tool accuracy, down 11 percentage points from the prior year.

| Developer Preference | What Copilot Suggests | What Developer Wants |
| --- | --- | --- |
| Immutable by default | var/let declarations | const everywhere |
| Functional patterns | Class-based solutions | Composable functions |
| Early returns | Nested if/else blocks | Guard clauses |
| Minimal comments | Verbose JSDoc blocks | Self-documenting code |
| Result types for errors | try/catch blocks | Explicit error handling |

What a Developer Self-Model Contains

A developer self-model is not a settings file. It is a structured representation of how a developer thinks about code: beliefs, preferences, and patterns that emerge from thousands of interactions.

Syntactic preferences: The surface-level patterns. const over let. Arrow functions over function declarations. Template literals over string concatenation. These are easy to detect and have immediate impact.

Architectural beliefs: The deeper patterns. Whether the developer prefers composition over inheritance. Whether they believe in thin services or rich domain models. Whether they favor co-located tests or a separate test directory. These beliefs shape the structure of generated code.

Quality beliefs: What the developer considers “good” code. Some developers optimize for readability above all else. Others optimize for performance. Some insist on comprehensive error handling; others prefer letting unexpected errors propagate. These beliefs determine how the copilot should trade off competing concerns. Qodo’s State of AI Code Quality report [4] found that 65% of developers say AI misses context during refactoring, and 61% report the same during code review. The missing context is often the developer’s own quality philosophy.

Domain expertise: What the developer knows and does not know. A senior React developer who is new to Rust benefits from different suggestions than a Rust expert learning React. The copilot should explain less about the familiar domain and more about the new one.
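
To make these four categories concrete, here is one way a self-model could be shaped as data. This is an illustrative sketch only; the field names and types are assumptions, not the actual Clarity schema.

```typescript
// Illustrative sketch of a developer self-model's shape (not the Clarity schema).

type BeliefCategory = 'syntactic' | 'architectural' | 'quality' | 'domain';

interface Belief {
  statement: string;      // e.g. "Prefers composition over inheritance"
  category: BeliefCategory;
  confidence: number;     // 0..1, strengthened by consistent accept/reject signals
  context?: string;       // optional scope, e.g. "typescript" or "react-patterns"
  lastReinforced: string; // ISO timestamp of the most recent supporting signal
}

interface DeveloperSelfModel {
  developerId: string;
  beliefs: Belief[];
  // Expertise per language or domain, so explanations can adapt
  expertise: Record<string, 'novice' | 'intermediate' | 'expert'>;
}
```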

Syntactic Preferences

Surface-level patterns: const vs let, arrow vs function declarations, template literals vs concatenation. Easy to detect with immediate impact on acceptance rate.

Architectural Beliefs

Deeper patterns: composition vs inheritance, thin services vs rich domain models, co-located vs separated tests. These shape the structure of generated code.

Quality Beliefs

What the developer considers “good” code. Readability vs performance. Comprehensive error handling vs letting errors propagate. These determine trade-off decisions.

Domain Expertise

What the developer knows and does not know. A senior React developer new to Rust needs different suggestions than a Rust expert learning React.

Stateless Copilot

  • × Suggests code based on language model averages
  • × Same suggestions regardless of developer experience
  • × Ignores accept/reject patterns from previous sessions
  • × Developer rewrites 70% of suggestions to match style

Self-Model-Aware Copilot

  • Suggests code aligned with developer beliefs and style
  • Adapts complexity to developer expertise per language
  • Learns from every accept, reject, and edit across sessions
  • Suggestions match developer style from the first keystroke

The Accept/Reject Signal Gold Mine

Every time a developer accepts, rejects, or edits a copilot suggestion, they are expressing a preference. This signal is generated thousands of times per developer per month, and very little of it feeds back into personalization for the individual developer.

copilot-self-model.ts

```typescript
// Developer rejects a class-based suggestion (a pattern signal worth capturing)
await clarity.observe(devModelId, {
  type: 'suggestion_rejected',
  content: 'Rejected class component, wrote functional component instead',
  context: 'react-patterns', // domain context
});

// After 10 similar rejections, the self-model builds confidence in a pattern
const model = await clarity.getSelfModel(devModelId);
// => belief: 'Prefers functional components over class components' (0.91 confidence)
// => belief: 'Favors hooks over lifecycle methods' (0.87 confidence)

// The next suggestion uses this understanding to produce personalized output
const suggestion = await generateWithSelfModel(devModelId, codeContext);
// => Generates a functional component with hooks (not a class), matching the developer's style
```

The self-model turns the accept/reject stream into compounding intelligence. After a week, the copilot knows syntactic preferences. After a month, it understands architectural patterns. After three months, it generates code that gets accepted without modification on the first try.
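
One simple way to picture how repeated signals compound: treat each supporting or contradicting interaction as a small nudge to a belief's confidence. The update rule below is a hypothetical sketch (a plain exponential moving average), not how Clarity actually computes confidence.

```typescript
// Hypothetical confidence update: each signal nudges the belief toward 1 or 0.
function updateConfidence(
  current: number,     // existing belief confidence, 0..1
  supports: boolean,   // does this accept/reject support the belief?
  learningRate = 0.15, // assumed tuning value
): number {
  const target = supports ? 1 : 0;
  return current + learningRate * (target - current);
}

// Ten consecutive rejections of class-based suggestions push confidence in
// "prefers functional components" from 0.5 to roughly 0.90.
let confidence = 0.5;
for (let i = 0; i < 10; i++) confidence = updateConfidence(confidence, true);
console.log(confidence.toFixed(2)); // ≈ 0.90
```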

Beyond Syntax: Understanding Intent

The deepest level of copilot personalization is not about syntax or patterns. It is about understanding what the developer is trying to accomplish and why.

A developer typing function validate might be starting a form validation function (frontend context), a data validation middleware (backend context), or a schema validation utility (library context). A stateless copilot guesses based on the surrounding code. A self-model-aware copilot also considers: “This developer is working on a backend service, and their self-model shows they believe validation should happen at the boundary layer, not inline. They probably want a middleware function.”

This intent-level understanding transforms the copilot from a code completer into a thinking partner. It does not just finish the line. It understands what is being built and why, and generates code that aligns with the developer’s architectural vision.
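
In code, that intent-level step amounts to feeding the developer's high-confidence beliefs into the generation call alongside the surrounding code. The sketch below reuses the hypothetical Belief shape from earlier; generateSuggestion() is a stand-in for the underlying language-model call, not a real API.

```typescript
// Sketch of belief-informed generation. Both declarations are assumptions:
// clarity is typed loosely here, and generateSuggestion() is hypothetical.
declare const clarity: { getSelfModel(id: string): Promise<DeveloperSelfModel> };
declare function generateSuggestion(input: {
  code: string;
  guidance: string[];
}): Promise<string>;

async function suggestForContext(devModelId: string, codeContext: string): Promise<string> {
  const model = await clarity.getSelfModel(devModelId);

  // Keep only high-confidence beliefs, e.g. "validation belongs at the
  // boundary layer" when the developer is working on a backend service.
  const guidance = model.beliefs
    .filter((b) => b.confidence > 0.7)
    .map((b) => b.statement);

  // The language model sees both the surrounding code and the beliefs, so
  // "function validate" resolves to boundary-layer middleware, not an inline check.
  return generateSuggestion({ code: codeContext, guidance });
}
```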

The Pair Programmer Analogy

Research on pair programming shows why persistent understanding matters. A meta-analysis published in Information and Software Technology [5] found that pair programming produces higher-quality code, with the biggest gains on complex tasks. The key factor is not typing speed. It is shared mental context: knowing which patterns the other person favors, remembering past decisions, and anticipating needs.

A stateless copilot is a pair programmer with amnesia. Every day is day one. A self-model-aware copilot is a pair programmer who has worked alongside the developer for months. It anticipates needs, matches style, and makes suggestions that feel like extensions of the developer’s own thinking.

The difference in developer experience is qualitative. Using a stateless copilot feels like dictating to a fast typist. Using a self-model copilot should feel like thinking with a collaborator.

Stateless Copilot

A pair programmer with amnesia. Every day is day one. Feels like dictating to a fast typist who never remembers your preferences.

Self-Model Copilot

A pair programmer who has worked alongside you for months. Anticipates needs, matches style. Feels like thinking with a collaborator.

The Industry Is Moving This Direction

GitHub itself has recognized this gap. In late 2025, GitHub launched Copilot Memory [6], a feature that enables Copilot to learn and retain repository-specific patterns like architectural conventions and cross-file dependencies. It is a meaningful step forward. But Copilot Memory is scoped to repositories, not individual developers. It learns that a codebase uses a specific database connection pattern, not that a particular developer prefers functional composition over class hierarchies. And memories expire after 28 days [7].

The missing layer is a persistent, developer-level self-model that captures individual beliefs, preferences, and thinking patterns across repositories and over time.

Team and Org-Level Benefits

Developer self-models benefit more than individual productivity. At the team level, they reveal patterns that inform tooling and process decisions.

If 80% of developers on a team reject class-based suggestions, the team’s style guide should probably be updated. If new hires’ self-models show a steep learning curve on a specific architectural pattern, the onboarding docs need improvement. If a developer’s model shows declining code review engagement, something about the process is not working for them.

Aggregate (anonymized) belief data from developer self-models gives engineering leaders insight into how their teams actually work, not how the official process says they should work.
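
As a sketch of what that aggregation could look like (reusing the hypothetical DeveloperSelfModel shape from earlier), team-level prevalence of a belief is just the share of developers who hold it with high confidence:

```typescript
// Illustrative team-level aggregation over anonymized self-models.
function beliefPrevalence(models: DeveloperSelfModel[], statement: string): number {
  const holders = models.filter((m) =>
    m.beliefs.some((b) => b.statement === statement && b.confidence > 0.7),
  );
  return models.length === 0 ? 0 : holders.length / models.length;
}

// If this returns 0.8 for "Prefers functional components over class components",
// the team style guide should probably say so explicitly.
```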

Implementation: Where to Start

For teams building developer tools with AI assistance, here is how to add self-model awareness incrementally.

Start with rejection tracking. The simplest high-value signal is what the developer rejects. Every rejected suggestion is a direct statement: “This is not how I code.” Log rejections with context (language, file type, pattern category) and feed them to the self-model. Within a week, the model will have high-confidence beliefs about syntactic preferences.
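
A minimal sketch of what that logging could look like, following the clarity.observe() pattern from the earlier example. The editor event shape is hypothetical.

```typescript
// Hypothetical rejection event captured from the editor.
interface RejectionEvent {
  language: string;        // e.g. "typescript"
  fileType: string;        // e.g. "component", "test", "config"
  patternCategory: string; // e.g. "control-flow", "error-handling"
  replacement: string;     // what the developer wrote instead
}

async function onSuggestionRejected(devModelId: string, e: RejectionEvent) {
  await clarity.observe(devModelId, {
    type: 'suggestion_rejected',
    content: `Rejected ${e.patternCategory} suggestion, wrote: ${e.replacement}`,
    context: `${e.language}:${e.fileType}`,
  });
}
```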

Add edit tracking. When a developer accepts a suggestion and immediately edits it, the diff between the suggestion and the edit is a precision signal. “Copilot suggested let, developer changed to const” is a specific, actionable belief update.
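
A sketch of that diff capture, again assuming the clarity.observe() call from earlier. naiveLineDiff() is a deliberately crude stand-in for a real diff utility.

```typescript
// Crude line diff, good enough to illustrate the signal.
function naiveLineDiff(before: string, after: string): string[] {
  const a = before.split('\n');
  const b = after.split('\n');
  return [
    ...a.filter((line) => !b.includes(line)).map((line) => `- ${line}`),
    ...b.filter((line) => !a.includes(line)).map((line) => `+ ${line}`),
  ];
}

async function onSuggestionEdited(devModelId: string, accepted: string, edited: string) {
  // e.g. ["- let total = 0;", "+ const total = 0;"]
  const changes = naiveLineDiff(accepted, edited);
  await clarity.observe(devModelId, {
    type: 'suggestion_edited',
    content: `Accepted then edited: ${changes.join(' | ')}`,
    context: 'syntactic-preference',
  });
}
```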

Graduate to architectural signals. Once syntactic preferences are stable, begin tracking higher-level patterns. Does the developer reorganize generated code into smaller functions? Do they extract interfaces from class suggestions? Do they add error handling that the copilot omitted? These signals reveal architectural beliefs.
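
These higher-level signals can be detected with heuristics over the same accepted-vs-edited diff. The checks below are illustrative only; a real system would use the AST rather than string matching.

```typescript
type ArchitecturalSignal =
  | 'extracted_smaller_functions'
  | 'extracted_interface'
  | 'added_error_handling';

// Illustrative heuristics for architectural belief signals.
function detectArchitecturalSignals(accepted: string, edited: string): ArchitecturalSignal[] {
  const signals: ArchitecturalSignal[] = [];
  const fnCount = (s: string) => (s.match(/function |=> /g) ?? []).length;

  if (fnCount(edited) > fnCount(accepted)) signals.push('extracted_smaller_functions');
  if (!accepted.includes('interface ') && edited.includes('interface ')) signals.push('extracted_interface');
  if (!accepted.includes('catch') && edited.includes('catch')) signals.push('added_error_handling');
  return signals;
}
```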

Surface the model. Give developers a “My Coding Preferences” view where they can see what the self-model has learned and correct inaccuracies. This transparency builds trust and catches misinterpretations early. Some developers will proactively add preferences the model has not yet detected. This matters because, as GitHub’s own productivity research [8] found, developer satisfaction and flow state are as important as raw speed. 73% of developers in their study reported staying in the flow state while using Copilot, and a self-model that reduces unnecessary interruptions from bad suggestions would amplify that effect.
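
A sketch of how such a view and its corrections could be wired up. clarity.getSelfModel() follows the earlier example; the 'belief_correction' signal type is an assumption.

```typescript
// List what the model currently believes, with confidence, so the developer can audit it.
async function showPreferences(devModelId: string) {
  const model = await clarity.getSelfModel(devModelId);
  for (const belief of model.beliefs) {
    console.log(`${belief.statement} (confidence ${belief.confidence.toFixed(2)})`);
  }
}

// Let the developer confirm, reject, or proactively add a belief.
async function correctBelief(devModelId: string, statement: string, holds: boolean) {
  await clarity.observe(devModelId, {
    type: 'belief_correction', // hypothetical signal type
    content: `${holds ? 'Confirms' : 'Rejects'} belief: ${statement}`,
    context: 'preferences-view',
  });
}
```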

Phase 1: Rejection Tracking

Log what the developer rejects with context (language, file type, pattern category). Within a week, high-confidence beliefs about syntactic preferences emerge.

Phase 2: Edit Tracking

Capture the diff between accepted suggestions and immediate edits. “Suggested let, changed to const” is a specific, actionable belief update.

Phase 3: Architectural Signals

Track higher-level patterns: code reorganization into smaller functions, interface extraction from classes, error handling additions. These reveal architectural beliefs.

Phase 4: Surface the Model

Give developers a “My Coding Preferences” view. Transparency builds trust and catches misinterpretations. Some developers will proactively add preferences.

Trade-offs and Limitations

Initial calibration period. A new self-model needs a meaningful number of suggestion interactions before it has high-confidence beliefs about a developer’s preferences. During this period, the copilot experience is no better than stateless. Clear communication about this calibration period is important to set expectations.

Context switching creates complexity. A developer who writes Go at work and Python for side projects has different preferences in each language. The self-model must be context-aware. Beliefs about code style in Go should not contaminate Python suggestions. Multi-language developers need domain-scoped beliefs.

Preference evolution. Developers change their opinions. Someone who preferred classes two years ago might now prefer functions. The self-model must decay old beliefs and update based on recent signals, or it will anchor to outdated preferences.
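
One simple way to model that decay is to let confidence drift back toward "uncertain" as a belief goes unreinforced, so recent signals can overturn old habits. The half-life value and the rule itself are illustrative assumptions.

```typescript
// Recency-based decay: confidence drifts toward 0.5 (uncertain), not toward 0.
function decayedConfidence(
  confidence: number,
  daysSinceReinforced: number,
  halfLifeDays = 90, // assumed tuning value
): number {
  const decay = Math.pow(0.5, daysSinceReinforced / halfLifeDays);
  return 0.5 + (confidence - 0.5) * decay;
}

// A two-year-old "prefers classes" belief at 0.9 confidence decays to ~0.50,
// so a handful of recent functional-style accepts can flip it.
console.log(decayedConfidence(0.9, 730).toFixed(2)); // ≈ 0.50
```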

Privacy of coding patterns. A developer’s self-model reveals their thinking patterns, expertise level, and productivity habits. This data should be owned by the developer, not the employer. Self-models must never be used as performance evaluation inputs. The Stack Overflow survey data [9] showing that 46% of developers actively distrust AI accuracy suggests that trust and transparency around what is captured will be a decisive factor in adoption.

Calibration Period

A new self-model needs meaningful suggestion interactions before high-confidence beliefs form. The copilot experience starts no better than stateless.

Context Switching

Multi-language developers need domain-scoped beliefs. Go code style should not contaminate Python suggestions. The model must be context-aware.

Preference Evolution

Developers change opinions over time. The self-model must decay old beliefs and update based on recent signals to avoid anchoring to outdated preferences.

Privacy of Patterns

Self-models reveal thinking patterns and productivity habits. This data must be owned by the developer, never used as performance evaluation inputs.

What to Do Next

  1. Track your copilot edit rate: For one week, note how often you modify copilot suggestions to match your style. If you are editing more than 50% of accepted suggestions, that is work the copilot should be doing.
  2. Document your top 5 coding beliefs: Write down the five patterns most consistently enforced in code reviews. These are the highest-value self-model beliefs, the ones that would save the most time if the copilot already knew them.
  3. Explore self-model-aware development: Try the Clarity API playground to see how developer self-models work. Build a prototype that personalizes code suggestions based on documented preferences.

References

  1. 84% of developers now use or plan to use AI tools
  2. GitHub’s enterprise study with Accenture
  3. 2025 Stack Overflow Developer Survey
  4. Qodo’s State of AI Code Quality report
  5. meta-analysis published in Information and Software Technology
  6. Copilot Memory
  7. expire after 28 days
  8. GitHub’s own productivity research
  9. Stack Overflow survey data
