Six types of context your agents need - and where you can find each.

The 6 types of context for AI Agents in more detail and how to actually capture and make them usable

Last week I introduced the six types of context that separate a useful agent from a generic one, and looked at what the gap actually looks like between "anyone with ChatGPT" and "you and your team with a well-built agent."

This week, I'm going to walk through each type in more detail and how to actually capture and make them usable for an agent.

There's a big difference between "we have this knowledge somewhere" or "that one person knows how that process works" and having an agent with access to that information. Almost every business I work with has some version of all six types of context - the challenge is where it lives right now and how far it is from being agent-ready.

The easy wins

The two most common types of context are relatively easy to find because they tend to already exist in some written form, even if it's messy.

Procedural Context - "How we do things."

Most teams have something written down. An onboarding doc. A training guide that's mostly up to date. A campaign setup checklist. The knowledge exists - it's just not always structured or formatted in a way an agent can use.

The steps from "we have this" to "agent-ready" are mostly reformatting and filling gaps. Take your existing SOPs, clean them up, and make explicit the decisions your expert teams currently understand intuitively. The bits where someone would normally say "oh, and we always do X because of Y" - write those down too. And if things live in formats like Word docs or PowerPoint slides, there are all kinds of "Format X to Markdown" tools available. Markdown is a favourite format of agents because it's just plain text, with special characters (like hash, star, hyphen) used to indicate structure and formatting.
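To make the "reformatting" step concrete, here's a minimal sketch of what an agent-ready SOP looks like, assuming a plain-text checklist as input. The title, steps, and "why" notes are hypothetical; real conversion tools like pandoc handle Word and PowerPoint files directly, but the target shape is the same.

```python
def sop_to_markdown(title: str, steps: list[str], notes: dict[str, str]) -> str:
    """Render an SOP as Markdown: '#' for the title, '-' for steps,
    plus a section that makes the tacit 'we always do X because of Y'
    knowledge explicit."""
    lines = [f"# {title}", "", "## Steps"]
    lines += [f"- {step}" for step in steps]
    if notes:
        lines += ["", "## Why we do it this way"]
        lines += [f"- **{what}**: {why}" for what, why in notes.items()]
    return "\n".join(lines)

# Hypothetical example content
doc = sop_to_markdown(
    "Campaign setup",
    ["Create the campaign shell", "Apply the naming convention", "Set budget caps"],
    {"Naming convention": "our reporting scripts parse campaign names"},
)
```

The "Why we do it this way" section is the important part - it's where the knowledge that normally lives in someone's head gets written down alongside the steps.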

This is the easiest win because you're not having to create any sources from scratch, you're just making existing knowledge agent-readable.

Specific Domain Context - "What we're doing right now."

Goals, budgets, targets, timelines, campaign structure. This stuff changes often, but it's usually documented somewhere - in a brief, a project management tool, a spreadsheet. The challenge isn't that it doesn't exist. It's that it's scattered across different systems and nobody's brought it together in one place.

For most businesses, getting this agent-ready is a similar job to the procedural context - restructuring and reformatting.

The middle ground

Three types that are harder - not because the knowledge doesn't exist, but because it's distributed across people and rarely written down in a structured way.

General Domain Context - "What we know."

Your business, your clients' businesses, their industries, competitors, market positions, margins. Someone on your team knows (or should know!) this stuff - and can tell you in thirty seconds whether a client is a challenger brand or a market leader, what their margins look like, what competitive pressures they're facing.

But usually it's not formally documented anywhere. It's the kind of knowledge that accumulates through years of working with clients and in certain industries, and it's based on experience and intuition rather than on reading documents.

The process here is "deliberate extraction" - structured conversations, client business reviews, or even just asking your team "what would a new hire need to know about this client that isn't currently written down?" Those answers are the extra context that really makes an agent (or employee) valuable. They're also the things that tend to walk out of the door when someone leaves and get lost if they're not written down.

Preference Context - "How we like it done."

Communication style. Reporting format. Level of detail. Risk tolerance. Whether the client wants you to dive deep into everything or just the things that need action.

This is the context type that sounds less important than pure "information" until you get it wrong. Build an agent that produces detailed, technical analysis for a client who wants a high-level, three-bullet summary, and they'll stop reading the output. Build one that doesn't provide enough data for a client who wants to understand what's actually happening day to day, and they'll stop trusting it and keep badgering you with questions.

The difficulty here isn't that preferences are complex (although they do change from user to user or client to client) - it's that they don't come to mind when people talk about "context". They're just "how we work." Getting this right usually means observing how your team actually communicates with each client, and providing examples of what "good" looks like in different scenarios (this is known as "few-shot" prompting), instead of trying to describe it in the abstract.
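Few-shot prompting in this setting is just showing the model example pairs of "request" and "what good looks like" before the real task. A minimal sketch, assuming a chat-style message format; the example texts and the terse house style are hypothetical:

```python
# Hypothetical examples of this client's preferred style:
# terse, action-first, no narrative padding.
EXAMPLES = [
    ("Weekly performance summary",
     "Spend up 4% WoW. ROAS steady at 3.1. Action: none needed."),
    ("Budget alert",
     "Campaign X will exhaust budget by Thursday. Action: raise cap or pause."),
]

def build_prompt(task: str) -> list[dict]:
    """Assemble a chat message list where each example pair teaches the
    preferred tone and level of detail, instead of describing it abstractly."""
    messages = [{"role": "system",
                 "content": "Write terse, action-first summaries for this client."}]
    for request, good_output in EXAMPLES:
        messages.append({"role": "user", "content": request})
        messages.append({"role": "assistant", "content": good_output})
    messages.append({"role": "user", "content": task})
    return messages

prompt = build_prompt("Monthly performance summary")
```

The model imitates the examples far more reliably than it follows an abstract instruction like "keep it brief" - which is exactly why observed examples of real communication beat descriptions of preferences.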

External Context - "What's happening around us."

Seasonality, platform changes, market shifts, competitor moves. The best people stay up to date on this instinctively - they read the blogs, they're in the communities, they notice when Google rolls out a new feature or a previously key feature gets deprecated.

The problem is that this awareness lives in people's daily habits, not in any system. Nobody writes down "Google changed how broad match works last Tuesday" (maybe outside of sharing the news on LinkedIn) - they just adjust their approach. For an agent, that means it's working from assumptions that might be months out of date - the model training happens well before the actual release date.

This is a particularly interesting type to solve because it's not a one-time capture exercise. External context needs ongoing feeds - platform changelogs, industry newsletters, market data. It's less about extraction and more about building a pipeline - updating data sources or giving the agent access to web search tools.
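The core of such a pipeline can be surprisingly simple: only surface items the model can't already know from training. A sketch, with hypothetical feed items standing in for real changelog or newsletter entries:

```python
from datetime import date

# Hypothetical items; in practice these would come from platform
# changelogs, RSS feeds, or newsletter digests.
FEED = [
    {"date": date(2024, 3, 2), "note": "Broad match behaviour updated"},
    {"date": date(2023, 6, 1), "note": "Old bidding feature deprecated"},
]

def fresh_context(feed: list[dict], model_cutoff: date) -> list[str]:
    """Keep only items published after the model's training cutoff -
    i.e. the external context the agent cannot know from training alone."""
    return [item["note"] for item in feed if item["date"] > model_cutoff]
```

Anything older than the cutoff the model has plausibly seen; anything newer must be fed in explicitly, on an ongoing basis.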

The hard one

Episodic Context - "What we've done before and what happened."

This is the type almost nobody uses with agents right now, and it's possibly the most valuable.

Episodic context is the history of actions, decisions, and outcomes. Without it, agents will likely suggest things you've already tried. They'll repeat the same mistakes. They'll miss patterns that would be obvious to anyone who'd been working on a process for more than a couple of hours.

When did we last make a change to this campaign? What was the result? We tested this strategy last quarter - it didn't work because the revenue tracking wasn't set up right. We tried targeting that audience for this client type before and it underperformed.

This knowledge lives in quarterly/annual review decks, Slack threads, meeting notes, people's memories, maybe comments in a spreadsheet report. It's the stuff that makes your most experienced team members so much more valuable than someone picking things up fresh - they've seen what happens when you do X. They have pattern recognition built on hundreds of past decisions.

For most businesses, getting episodic context is a two-part problem. First, there's the "backdating" - going through old docs and notes and extracting what's already happened. That's hard and time-consuming, but it's a one-time effort.

The second part is more important: building a process to capture this going forward. Every time a human reviews agent output and gives feedback, that interaction is episodic context. "No, we tried that - it didn't work because of X." If that feedback gets captured and stored, the agent has it next time. If it doesn't, you're starting from zero again.
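The forward-capture mechanism doesn't need to be sophisticated to start compounding. A minimal sketch, assuming an append-only JSONL log (the file name and keyword lookup are illustrative; a real system would likely retrieve by embedding similarity, but the shape is the same):

```python
import json
from pathlib import Path

LOG = Path("episodic_log.jsonl")  # hypothetical store

def record_feedback(action: str, outcome: str, reason: str) -> None:
    """Append one reviewed decision to the episodic log - e.g. captured
    whenever a human corrects agent output."""
    with LOG.open("a") as f:
        f.write(json.dumps({"action": action, "outcome": outcome,
                            "reason": reason}) + "\n")

def recall(keyword: str) -> list[dict]:
    """Naive retrieval: return past episodes mentioning the keyword,
    so the agent can check 'have we tried this before?'."""
    if not LOG.exists():
        return []
    episodes = [json.loads(line) for line in LOG.read_text().splitlines()]
    return [e for e in episodes if keyword.lower() in json.dumps(e).lower()]
```

Every "no, we tried that - it didn't work because of X" that goes through `record_feedback` is available to the agent next time; every one that doesn't is lost.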

This is the compound interest of agent context. And it's where I see the biggest gap between what people are building now and what's actually possible.

Where most people actually are

Here's what I tend to see when I look at how ready most businesses are to give their agents useful context:

| Context Type | Where it usually lives | How far from agent-ready? |
| --- | --- | --- |
| Procedural Context | Google Docs, training materials, someone's head | Close - needs structuring, not creating |
| Specific Domain Context | Briefs, spreadsheets, project management tools | Close - needs consolidating from multiple sources |
| General Domain Context | Employees' heads, scattered notes | Medium - needs deliberate extraction |
| Preference Context | Implicit in how people work | Medium - needs observing and documenting |
| External Context | Update emails, trade press, social media | Medium-hard - needs an ongoing pipeline, not a one-time capture |
| Episodic Context | Slides, meeting notes, memory | Hard - needs both historical extraction and a future capture mechanism |

There's a clear pattern: the types that are easiest to capture are the ones everyone already has some version of. The context that would make the biggest difference is the hardest to get.

That's not a coincidence. It's also not a reason to wait. Starting with the easy wins - getting your processes and campaign context structured - is still a meaningful step. But the real competitive advantage is in the bottom half of that table.

How this maps to cognitive science

A lot of my thinking around context has been informed by a blog post from Harrison Chase (CEO of LangChain - one of the very first AI agent frameworks) on agent memory that maps more explicitly to cognitive science research. It groups memory into three types - procedural memory (how to do things), semantic memory (facts and knowledge), and episodic memory (past experiences). The six types of context I use map directly onto that framework:

| Your Context Type | Cognitive Memory Type | How Stable? |
| --- | --- | --- |
| Procedural Context | Procedural | Very stable - update occasionally |
| General Domain Context | Semantic | Stable - updates quarterly-ish |
| Specific Domain Context | Semantic (scoped) | Changes per campaign/project |
| External Context | Semantic (dynamic) | Highly dynamic - needs regular refresh |
| Episodic Context | Episodic | Grows over time - the compound interest type |
| Preference Context | Procedural (personal) | Stable-ish - evolves slowly |

I share this because, at this stage, we're building agents designed to scale how your best people work - we're essentially trying to codify and give an agent the same underlying types of knowledge that humans have.

What's next

Now that we've mapped the types and where they live, there's an obvious next question: what do I do with it? Just load everything in and hope for the best?

No. That's actually where a lot of people go wrong. Context windows (how much data a Large Language Model can process in one go) are limited, more context isn't always better, and the decision of when to load which context really matters. It's like briefing someone by giving them the contents of an entire library - technically they have all the information, but practically they can't find any specific bit you might ask them about. I'll go into that properly next week.
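As a preview of why selection matters, here's a deliberately simple sketch of choosing context under a token budget: take the most relevant chunks first and stop when the budget is spent. The relevance scores are assumed to come from elsewhere (retrieval, heuristics), and token counts are approximated as word counts for illustration:

```python
def select_context(chunks: list[tuple[float, str]], budget_tokens: int) -> list[str]:
    """Greedy context selection: each chunk is a (relevance_score, text)
    pair; keep the highest-scoring chunks that fit within the budget.
    Word count stands in for a real tokenizer here."""
    chosen, used = [], 0
    for score, text in sorted(chunks, key=lambda c: c[0], reverse=True):
        cost = len(text.split())
        if used + cost > budget_tokens:
            continue  # too big to fit - skip and try smaller chunks
        chosen.append(text)
        used += cost
    return chosen
```

Even this toy version makes the trade-off visible: a fixed budget forces you to rank your context, which is exactly the decision "load everything" avoids making.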

For now, one practical takeaway: don't start with thinking about context, or even agents in general. Start with a use case.

Find one specific task where your team spends real time and, first, make sure an agent is the best solution for it (as I've said in previous posts, that's not always the case). A recurring review. A reporting process. A weekly check that your best person does brilliantly but nobody else quite does as well.

Then work backwards: what would someone need to know to do this well? That question will tell you exactly which context types matter, in what order, for that specific problem.

This is important because "let's capture all our context" falls into the AI-solution-looking-for-a-problem trap. It's a massive documentation exercise with no obvious, measurable payoff. But "this task takes four hours every week and our best person does it differently from everyone else" - that's a problem worth solving. The context work flows naturally from there, scoped tightly to something concrete. You'll know exactly what to capture because you know exactly what the agent needs to do, and the information needed to do it.