Context Is the New Competitive Advantage
Something shifted in the last few weeks. Not in model capability. In what people are building on top of it.
I have been watching this from two angles. During the day, I work on enterprise software for field service and logistics. In the evenings, I build a side project with Claude Code and an Obsidian vault full of product knowledge. Both experiences point in the same direction, and it is not where most of the AI conversation is focused.
The discourse is still about models. Which one is smarter. Which benchmark went up. Whether Opus 4.6 or GPT Codex or Gemini 3 will win the next eval. I get it. The models are impressive. But I keep noticing that the gap between “this model can theoretically do this” and “this model does this well for my specific situation” is not a model problem. It is a context problem.
What Linear just said out loud
Linear published a piece last week titled “Issue tracking is dead.” The argument is worth paying attention to, not because issue tracking is literally dead, but because of what they are replacing it with.
Their framing: issue tracking was built for handoffs. A PM scoped the work. An engineer picked it up later. The system existed to bridge the gap between those two people. That gap required prioritization, negotiation, status updates, and ceremony. Over time, the ceremony started to feel like sophistication. More process looked like more maturity.
Linear is now positioning itself as something different. Not an issue tracker. A shared product system that turns context into execution. It holds feedback, intent, decisions, plans, and code. Humans and agents work from the same base.
The numbers behind this are real. Coding agents are installed in over 75% of Linear’s enterprise workspaces. Agents authored nearly 25% of new issues in the last three months. The volume of agent-completed work grew 5x in one quarter.
The interesting part is not that agents are doing more work. It is that agents become useful through context. Linear’s own words: “Agents are not mind readers. They become useful through context.” Customer feedback, internal ideas, strategic direction, decisions, code. All of this needs to live in a system that both humans and agents can work from.
That is a product bet on context infrastructure, not model capability.
The pattern in vertical AI agents
The same pattern shows up in vertical AI products across industries. I see it in logistics, where startups are building AI agents for freight forwarding. The model underneath is a frontier model, same one anyone can use through an API. The value is not the model. The value is that the agent connects to the freight forwarder’s email, their TMS, their carrier portals, and their proprietary systems. It understands what an exception looks like for LCL versus FCL. It knows which fields in the booking system matter. It has the operational context that a generic assistant will never have.
Strip out that context layer and you have a chatbot that can write nice sentences about logistics but cannot process a single rate request.
This is the pattern everywhere vertical agents are working. The model is commodity. The context is the product.
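To make the "context layer" idea concrete, here is a minimal sketch of what sits between a booking system and the model call in a vertical agent. Everything in it is invented for illustration (the field names, the exception rules, the prompt shape); the point is that the defensible work is the assembly step, not the model call.

```python
# Hypothetical context layer for a freight agent. All names here
# (EXCEPTION_RULES, REQUIRED_FIELDS, the booking shape) are invented
# for illustration, not taken from any real product.

EXCEPTION_RULES = {
    "LCL": "Flag if consolidation cutoff is missed or CFS receipt is late.",
    "FCL": "Flag if container pickup slips past the earliest return date.",
}

REQUIRED_FIELDS = ["shipment_id", "mode", "origin", "destination", "cutoff"]

def build_context(booking: dict) -> str:
    """Assemble the operational context a generic assistant lacks."""
    missing = [f for f in REQUIRED_FIELDS if f not in booking]
    if missing:
        raise ValueError(f"booking is missing fields: {missing}")
    rule = EXCEPTION_RULES[booking["mode"]]
    return (
        f"Shipment {booking['shipment_id']} ({booking['mode']}), "
        f"{booking['origin']} -> {booking['destination']}, "
        f"cutoff {booking['cutoff']}.\nException rule: {rule}"
    )

booking = {
    "shipment_id": "SH-1042", "mode": "LCL",
    "origin": "Hamburg", "destination": "Shanghai", "cutoff": "2025-06-01",
}
# `prompt` is what gets sent to the commodity model; the value lives in
# everything assembled above this line, not in the model call itself.
prompt = build_context(booking) + "\n\nIs this shipment at risk? Answer briefly."
```

Swap in a better model and this code does not change. Delete this code and the better model has nothing specific to reason about.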
Five layers deep
The Product Masterclass recently published a framework for this that I think nails the structure. They call it context engineering for product teams, and break it into five layers.
Strategy context. Where you are headed and why. Mission, strategic bets, positioning, business model constraints. Without this, AI helps you build the right feature the wrong way.
Discovery context. Everything you know about users and their problems. Interview findings, validated problems versus assumptions, jobs to be done. Without this, AI writes feature descriptions. With it, every requirement traces back to evidence.
Roadmap context. What you are building, what you chose not to build, and the reasoning behind both. This is the layer that lets an agent draft a stakeholder response with actual evidence instead of generic diplomacy.
Technical context. Architecture, data models, API specs, the backlog. This is the layer most people skip. It is also the layer that turns technically naive requirements into ones engineers respect.
Design context. Screenshots, flows, the design system. When an agent can see what your product actually looks like, it writes requirements that fit the real interface instead of describing abstract features.
The core point: you are not bad at prompting. Your AI just has amnesia. Every session starts from zero. The fix is not better prompts. It is building a context system that persists.
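A "context system that persists" can be almost embarrassingly simple. Here is a sketch, under the assumption that each of the five layers lives as plain markdown files on disk; the directory names mirror the framework above, and none of this is any specific tool's real API.

```python
# Minimal persistent context system: each layer is a folder of markdown
# notes, and every session starts by loading all of them. The layer
# names follow the five-layer framework; the layout is an assumption.
from pathlib import Path

LAYERS = ["strategy", "discovery", "roadmap", "technical", "design"]

def load_context(root: Path) -> str:
    """Concatenate every layer's notes into one context preamble."""
    parts = []
    for layer in LAYERS:
        layer_dir = root / layer
        if not layer_dir.is_dir():
            continue  # a layer you have not built yet is simply skipped
        for note in sorted(layer_dir.glob("*.md")):
            parts.append(f"## {layer}/{note.name}\n{note.read_text()}")
    return "\n\n".join(parts)

# Usage: prepend the result to every prompt so no session starts from zero.
# preamble = load_context(Path("~/context").expanduser())
# prompt = preamble + "\n\nDraft the stakeholder update for Q3."
```

The fix for amnesia is not cleverer wording in each session; it is that this preamble exists at all and grows over time.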
I think they are right. And I think the same principle applies well beyond product management.
Building a second brain that agents can read
I have been running a version of this for my own work, without initially thinking of it as context engineering.
My Obsidian vault holds everything: product decisions, design tokens, copy guidelines, data models, architectural context, competitive research, case study drafts, and writing voice guidelines. It is organized using the PARA method: projects, areas, resources, archives. Everything has a place. Everything is findable.
When I work with Claude Code on my side project, the agents pull from this vault as their single source of truth. A UX Writer agent checks the voice guide before editing a label. A Security Engineer agent reviews threat models against documented architecture decisions. A Product Manager agent checks scope against the roadmap rationale.
The CLAUDE.md file tells agents what conventions to follow and which docs to check. A memory file logs what went wrong and what tests were added. Specialized agent definitions live as markdown files with explicit principles and structured output formats.
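To show the shape of this, here is an illustrative sketch of what such a CLAUDE.md might contain. The paths, agent names, and rules are invented for the example, not copied from my actual files.

```markdown
# CLAUDE.md (illustrative sketch)

## Conventions
- Check `vault/design/voice-guide.md` before editing any user-facing copy.
- Data model changes must reference `vault/technical/data-model.md`.
- Scope questions go to `vault/roadmap/` before writing any code.

## Memory
- After every bug fix, append the cause and the new test to `vault/memory/log.md`.

## Agents
- `agents/ux-writer.md`: enforces the voice guide; outputs label plus rationale.
- `agents/security-engineer.md`: reviews changes against documented threat models.
- `agents/product-manager.md`: checks scope against roadmap rationale.
```

Nothing here is clever. It is just the accumulated rules written down where the agent is guaranteed to read them.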
This is not a chatbot. It is a system where the AI has persistent access to my accumulated product knowledge. The vault is the context layer. The agents are useful because they are grounded in decisions I already made, constraints I already documented, and patterns I already established.
The gap between “ask AI a question and get a generic answer” and “ask AI a question and get an answer that fits your specific product, your specific architecture, your specific voice” is entirely a context gap. The model capability is the same in both cases. The context is what changes the output from generic to useful.
This is headed somewhere specific. I keep thinking of it as building toward a personal JARVIS. Not in the sci-fi sense. In the practical sense of an AI system that knows your work, your decisions, your constraints, and your taste well enough to be a genuine collaborator instead of a well-spoken stranger.
What this means for software products
If context is the moat, then the competitive dynamics of software change.
A generic project management tool is easy to replace. Anyone can build a kanban board. The AI can help you code one in an afternoon. But a system that holds your product context, your team’s decisions, your customer feedback, your code intelligence, and makes all of that available to agents? That is a different kind of lock-in. Not the annoying kind where switching costs are artificial. The useful kind where the system gets better because it knows more about your situation.
Linear is betting on this explicitly. They are not competing on features. They are competing on context density. The more your product knowledge lives inside Linear, the more useful their agents become, the harder it is to leave. Not because of switching costs. Because the context is the value.
The same logic applies to every vertical agent. The freight forwarding AI is not defensible because of the model. It is defensible because of the integrations, the domain knowledge, the operational context that took years to encode. A competitor with a better model but no context layer loses every time.
And it applies to individuals too. My Obsidian vault is my context moat. The structured knowledge I have built over months makes every AI interaction more useful than it would be for someone starting from a blank prompt. That advantage compounds. Every decision I document, every learning I log, every convention I write down makes the next interaction sharper.
The shift
The AI conversation is going to move from “which model is best” to “which context system is best.” It is already happening, but most people have not named it yet.
Models will keep improving. They will get faster, cheaper, smarter. That is table stakes. The differentiator is not whether your AI can write code or draft a stakeholder email. Every model can do that now. The differentiator is whether your AI knows enough about your specific situation to write code that fits your architecture and draft an email that references the actual decision history.
Context engineering is not a technical detail. It is a strategic position. The teams, products, and individuals who invest in building and maintaining rich context systems will get compounding returns from AI. Everyone else will keep getting generic output and wondering why the technology does not live up to the hype.
The model is the engine. The context is the fuel. Right now, most people have a very powerful engine and an almost empty tank.
Linear’s “Issue tracking is dead” announcement: linear.app/next. The five-layer context engineering framework: product-masterclass.com. Anthropic’s context engineering guidance: anthropic.com/engineering. Manus on context engineering for agents: manus.im/blog.