
Not All Context Changes at the Same Speed

AI · Context Engineering · Developer Experience · From My Desk

Most conversations about context storage for AI-assisted development start with tooling. Git or Obsidian? In the repo or separate? Notion or markdown files?

That’s the wrong starting point.

After months of building with AI agents and watching context go stale in every setup I’ve tried, I’ve realized the better question is: how often does this context need to change?

Because a company principle and a sprint decision log have almost nothing in common. They serve different audiences, change at different speeds, and break in different ways when they’re stored wrong. Treating them the same is how you end up with either a 400-line CLAUDE.md that no one maintains, or a perfectly organized knowledge base that your agents never read.

Here’s the framework I’ve landed on.


Context has a clock speed

Not all context is created equal. Some of it is nearly permanent. Some of it changes every week. And the storage strategy should match the rhythm.

I think about it in three tiers.

[Figure: Context clock speed tiers]

Slow-moving context changes quarterly or less. Company mission, product principles, design system foundations, architectural tenets, OKRs at the org level. This stuff gets set once and revisited on a planning cadence. It’s the “why we do things this way” layer.

Medium-cadence context changes monthly or per cycle. Team-level conventions, domain-specific patterns, active feature context, API contracts, integration boundaries. This is the “how we’re building right now” layer. It evolves as the product evolves, but not every day.

Fast-moving context changes daily or per session. Decision logs, spike findings, bug investigation notes, “we tried X and it failed because Y,” session-specific agent instructions. This is the “what just happened” layer. It’s high volume, high value in the moment, and decays quickly if not captured.
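The three tiers can be sketched as a small lookup table. The tier names, cadences, and examples come from the descriptions above; the structure and the day thresholds in the classifier are illustrative, not a standard:

```python
# Illustrative mapping of the three tiers to their cadence and storage home.
# Tier names and examples follow the text; thresholds are assumptions.
CONTEXT_TIERS = {
    "slow": {
        "cadence": "quarterly or less",
        "home": "central context repo / knowledge base",
        "examples": ["company mission", "architectural tenets", "org-level OKRs"],
    },
    "medium": {
        "cadence": "monthly or per cycle",
        "home": "in-repo markdown (CLAUDE.md, ADRs)",
        "examples": ["team conventions", "API contracts", "active feature context"],
    },
    "fast": {
        "cadence": "daily or per session",
        "home": "personal notes / lightweight team log",
        "examples": ["decision logs", "spike findings", "session notes"],
    },
}

def tier_for(change_frequency_days: int) -> str:
    """Rough classifier: bucket context by how often it actually changes."""
    if change_frequency_days >= 90:
        return "slow"
    if change_frequency_days >= 14:
        return "medium"
    return "fast"
```

The useful move is running the classifier on context you already have: if a file in your central knowledge base changes every sprint, it is medium-cadence context living in a slow-context home.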

The mistake most teams make is storing all three tiers in the same place with the same process. That either makes slow context feel chaotic (too many changes, hard to find the stable truth) or makes fast context feel bureaucratic (who wants to open a PR just to log that a database migration approach didn’t work?).


Slow context wants weight

Company-level context (principles, OKRs, architectural tenets, brand voice, security policies) deserves a deliberate home with a deliberate review process. It changes once or twice a year. When it does change, it matters. People need to notice.

Where it belongs: A dedicated context repository or a well-maintained wiki/knowledge base. Somewhere with clear ownership, version history, and an explicit review cycle. This is the context that gets discussed in planning meetings, not in pull requests.

Why not in your code repo: Because it applies across repos, across teams, sometimes across products. Duplicating your company’s design principles into every microservice’s CLAUDE.md is a maintenance trap. When the principles evolve, you’re updating twelve files instead of one. And you won’t.

Why not in personal notes: Because this isn’t personal context. It’s shared truth. It needs to be discoverable and authoritative. If it lives in one person’s Obsidian vault, it dies when that person goes on vacation.

The maintenance pattern: Quarterly review. Tie it to your planning cadence. When OKRs shift, update the context. When architectural tenets evolve, update and announce. Assign an owner. Treat it like documentation that matters, because for your AI agents, it’s the most foundational layer of understanding.

[Figure: Maintenance cadence for each layer]


Medium context wants proximity

Team-level and domain-level context (coding conventions, feature-specific patterns, API design decisions, domain models, “how we handle X in this service”) lives closer to the work. It changes as the product changes, usually on a sprint or cycle cadence.

Where it belongs: In the code repository, versioned with the code. CLAUDE.md, docs/conventions.md, architectural decision records (ADRs), feature-specific context files. This is the layer where in-repo storage shines.
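A typical in-repo layout for this layer might look like the following; the file names beyond CLAUDE.md and docs/conventions.md are illustrative:

```
my-service/
├── CLAUDE.md                  # agent instructions for this repo
├── docs/
│   ├── conventions.md         # team coding conventions
│   ├── adr/
│   │   ├── 0001-use-graphql.md
│   │   └── 0002-event-sourcing.md
│   └── features/
│       └── checkout-context.md
└── src/
```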

Why here: Because this context is tightly coupled to a specific codebase. When you refactor from REST to GraphQL in one service, the context about API patterns needs to change in that repo, at that moment. Git gives you that for free. The context change is part of the same PR as the code change. It’s reviewable, attributable, and automatically versioned.

Why not in a central repo: Because it drifts. A central knowledge base that says “Service A uses factory patterns” will be wrong within weeks if Service A’s team moves to a different approach. The team won’t update the central repo. They’ll update their own docs, maybe. Distance from the code creates drift.

Why not in personal notes: Same reason. Team context needs to be shared, discoverable, and maintained by the team. One person’s vault can supplement it, but can’t replace it.

The maintenance pattern: Update context as part of the work. Made an architectural decision? Write the ADR in the same PR. Changed a convention? Update the CLAUDE.md in the same commit. The goal is zero-gap between the decision and the documentation. If updating context feels like a separate task, it won’t happen consistently.
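A decision record committed in the same PR can be very small. This sketch loosely follows the common Michael Nygard ADR shape; the contents are illustrative:

```markdown
# ADR 0007: Normalize dates at the API boundary

Status: Accepted
Date: 2025-01-15

## Context
The legacy API returns inconsistent date formats across endpoints.

## Decision
All inbound dates are normalized to ISO 8601 in the adapter layer
before any business logic runs.

## Consequences
Downstream code can assume ISO 8601; the adapter is the single
place to update if the legacy formats change.
```

Because the record rides along with the code change, the reviewer sees the decision and its implementation in one diff.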


Fast context wants low friction

This is the layer most people ignore. And it’s where the most value leaks out. Every debugging session, every spike, every “here’s what we learned” conversation generates context that’s immediately valuable and rapidly perishable.

Decision logs. Spike findings. Session notes. “We evaluated three approaches and chose B because of latency constraints.” “The legacy API returns inconsistent date formats. Always normalize before processing.” These insights are gold for your next AI agent session, but they’re gone if the only place they exist is someone’s memory.

Where it belongs: The lowest-friction capture tool available. For solo builders, that’s your personal knowledge management system. Obsidian, a synced markdown folder, even Apple Notes. For teams, it’s wherever your team already communicates: a dedicated Slack channel, a running log in the repo, or a shared lightweight doc.

Why low friction matters here: Because the moment you add process to fast context, people stop capturing it. If logging a decision requires opening a PR, writing a commit message, and waiting for CI, no one will do it for a quick finding. The capture mechanism needs to be faster than forgetting.

Why not in the code repo (usually): Because git adds friction. Not a lot, but enough. And because fast context is often messy, provisional, and personal. It hasn’t been validated yet. Putting raw session notes into a versioned codebase clutters history and creates noise for other contributors.

Why personal notes work here: Because this is where personal knowledge management systems earn their keep. Your Obsidian vault, your markdown folder. These are low-friction, richly linked, and personally maintained. You capture fast, link later, and promote the durable insights up to the medium-cadence layer when they’ve proven their worth.

The maintenance pattern: Capture immediately, curate weekly. Write messy notes during the session. Once a week, scan your fast-context captures and ask: is any of this stable enough to move into the repo’s CLAUDE.md or a decision record? If yes, promote it. If not, it stays in your personal layer. Still useful for your own AI sessions, but not yet team knowledge.


The layered model

When you put it together, you get a natural stack:

[Figure: Context layered model]

Layer 1: Organizational context (slow). Central repository or knowledge base. Reviewed quarterly. Owned explicitly. Covers principles, OKRs, security policies, brand, architecture tenets. Every AI agent across the org can reference this layer.

Layer 2: Team and domain context (medium). In-repo markdown files. Updated alongside code changes. Owned by the team. Covers conventions, patterns, ADRs, feature context, integration docs. Each AI agent reads the context for its specific repo.

Layer 3: Working context (fast). Personal knowledge tools or lightweight team logs. Captured in the moment, curated weekly. Owned by individuals. Covers session findings, decision rationale, debugging insights, experimental results. This is the personal edge that makes your AI interactions meaningfully better than a cold start.

Context flows upward through this stack. A fast insight gets captured in your vault. After it proves durable, you promote it into a repo-level decision record. If it reflects a broader pattern, it eventually lands in the organizational layer. Each promotion adds structure and review but also adds reach. More people and more agents benefit from it.

[Figure: How an insight flows upward through layers]


What this looks like at different scales

Solo builder: You mostly live in layers 2 and 3. Your personal vault is your primary context store, and your repo’s CLAUDE.md holds your project-specific agent instructions. You’re both the author and the audience, so the promotion path is just you deciding something is stable enough to formalize. Keep it simple. One vault, one CLAUDE.md per project.

Small team (2-5): Layer 2 becomes critical. In-repo context is your shared source of truth. Each team member maintains their own layer 3 and promotes insights upward through PRs. You probably don’t need a formal layer 1 yet. A shared doc or lightweight README in a team repo covers organizational context.

Mid-size team (5-20): You need all three layers. The organizational layer prevents drift across multiple repos and sub-teams. In-repo context stays team-owned. Personal knowledge systems are encouraged but not mandated. The new challenge is the promotion process: how do insights flow from personal notes to repo docs to shared standards?

Enterprise: Add governance to each layer. Layer 1 gets owners, review boards, and change management. Layer 2 gets linting, CI checks for context freshness, maybe automated staleness alerts. Layer 3 might involve structured templates for decision logs. And you’ll need tooling that composes context from multiple layers into a coherent agent prompt. This is where context engineering becomes an organizational capability, not a personal practice.
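Composing context from multiple layers into one agent prompt can start as simply as concatenating whichever layer files exist. A minimal sketch; the file paths are assumptions about where each layer lives, not a fixed convention:

```python
from pathlib import Path

# Illustrative layer order: organizational first, then repo, then session notes.
# Paths are assumptions; adapt to wherever each layer actually lives.
LAYER_SOURCES = [
    ("organizational", "context/org/principles.md"),
    ("team",           "CLAUDE.md"),
    ("working",        "notes/session.md"),
]

def compose_context(root: str) -> str:
    """Concatenate whichever layer files exist into one agent prompt preamble."""
    sections = []
    for layer, rel_path in LAYER_SOURCES:
        path = Path(root) / rel_path
        if path.exists():
            sections.append(
                f"## {layer} context\n{path.read_text(encoding='utf-8').strip()}"
            )
    return "\n\n".join(sections)
```

Missing layers simply drop out, so the same composition step works for a solo builder with one CLAUDE.md and an enterprise with all three layers populated.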


The principle underneath

Store context where its clock speed matches the storage medium’s natural rhythm.

Slow context needs weight, review, and durability. Give it a stable, governed home. Medium context needs to move with the code. Keep it in the repo. Fast context needs zero friction. Keep it personal and messy, then promote what lasts.

The mistake isn’t choosing the wrong tool. The mistake is storing everything at the same speed. Your company principles don’t need the agility of a scratch pad. Your debugging notes don’t need the ceremony of a reviewed knowledge base.

Match the rhythm, and context maintains itself. Fight the rhythm, and it rots.


This is Part 2. Part 1: Where Should Your Context Live? covers the three main storage approaches and why the right setup is a stack, not a single choice.

Ole Harland

Product designer in Hamburg with 15+ years designing complex platforms. Currently exploring AI as a design and build tool.