
Your expertise is the product. Agents with context is how you scale it.

'Give it more context' has become one of those pieces of advice that sounds helpful but doesn't actually tell you anything useful or practical - here's what you should actually do.


Last week I laid out the reasons I think agentic workflows are where things are heading - flexibility, models getting smarter, and compounding improvement from human feedback. I also flagged the major caveat: an agent with bad context will confidently do the wrong thing. That's worse than a rigid flow that does the right thing in a more limited way.

So this week, I want to dig into what "context" actually means. Because "give it more context" has become one of those pieces of advice that sounds helpful but doesn't actually tell you anything useful or practical.

The reframe

Here's the thing I think most people get wrong about AI and context: they think of it as a “prompt engineering” problem. How do I phrase this better? What extra information should I paste in?

That's the wrong way to approach it.

The most valuable context you can give an agent isn't information scraped from the internet. It isn't in a generic playbook or a “killer prompt template” you got from a LinkedIn know-it-all. It's the knowledge in your head and your team's heads. It's the ten years of experience that taught you which signals actually matter. It's the testing that showed certain things work for certain types of clients but not others. It's the instinct that says "this report looks fine at a high level but something's off" - and the ability to articulate why.

All that knowledge - your expertise, your team's expertise - is what customers actually pay for. And right now, a lot of it is trapped in people's heads. It scales only as fast as you can hire, train, and pass knowledge from person to person. When someone leaves, it leaves with them. And it's applied inconsistently across the business, depending on who's working on what that week.

What agents let you do - if you build them in the right way - is capture that expertise and deploy it at scale. Not replace the people it came from. Scale them. The agent isn't the product. Your knowledge is the product. The agent is the delivery mechanism.

The gap most people don't close

This is also what separates a valuable agent from someone just asking ChatGPT what to do.

Anyone can go to ChatGPT right now and say "review my Google Ads account" and they'll probably get a pretty generic answer. Check your quality scores. Test your ad copy. Optimise your bidding strategy. Fine - but it's advice that could apply to basically any ad account on the planet.

It won't know that you've tested fifty different approaches to PMax campaigns and found that a specific structure works best for e-commerce clients with high SKU counts. It won't know that your team learned the hard way that following the Google recommendations generated inside the platform itself is usually a mistake (the #1 recommendation always being "increase your budget and pay us more" is really the giveaway here). It won't know that this particular client tried running display ads and it tanked because their conversion tracking wasn't clean.

That's the gap. The model is the same. The tools are the same. The difference is the context - and the context comes from you.

Here's what that gap actually looks like:

| | Average person + ChatGPT | Your team + a well-built agent |
|---|---|---|
| Process | Generic best practice | Your tested, proven playbooks |
| Client knowledge | Whatever they type in | Deep understanding of business, market, goals |
| History | None - starts fresh every time | Every test, decision, and lesson learned |
| Platform awareness | Training data (months old) | Fed with latest changes and updates |
| Output | Textbook recommendations anyone could get | Recommendations that reflect your years of expertise |

Same AI, same underlying technology - completely different output, because one version has your context and the other doesn't.

Six types of context

So if context is the differentiator, what actually is good context?

I've found it useful to break it down into six types. Doing that forces you to think about what you actually have, what you're missing, and where the gaps are. “Give it more context” is lazy advice from someone trying to look smart online. Knowing which type of context you need to add is what actually changes your output.

1️⃣ Procedural - “How we do things.” SOPs, playbooks, training materials. Usually lives in someone's head, sometimes in an (up-to-date) document.

2️⃣ General Domain - “What we know.” The wider context you're working in. Your (or a client's) business, market, competitors, margins. The stuff that makes a recommendation relevant vs. generic.

3️⃣ Specific Domain - “What we're doing right now.” Goals, budgets, targets, timelines. Varies by process and goal, and changes more often than the first two.

4️⃣ External - “What's happening around us.” Seasonality, platform changes, market shifts. Your best employees instinctively stay up to date on this stuff. Your agents don't - unless you build it in.

5️⃣ Episodic - “What we've done before and what happened.” Past tests, optimisations, lessons learned. Without this, the agent will suggest things you already tried and rejected. With it, it learns from those results.

6️⃣ Preferences - “How we like it done.” Communication style, reporting guidelines, level of detail. Again, things people handle without much conscious thought. But if the system doesn't understand and act on them, the output won't match what people want or expect - and they'll stop using the tool.

Every single one of those types comes from human expertise. Your processes. Your client knowledge. Your experience. Your awareness of the market. Your track record. Your team's working style. Building an agent without this context is just ChatGPT in a different wrapper.
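
To make that concrete, here's a minimal sketch of one way those six types could be structured when you feed them to an agent. Everything here - the field names, the prompt-assembly approach, the example values - is an illustrative assumption, not a prescription. The point is that each type becomes an explicit slot you fill deliberately, rather than something you hope the model guesses:

```python
# A minimal sketch: the six context types as explicit, fillable slots.
# Field names, structure, and example values are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class AgentContext:
    procedural: str = ""       # 1. How we do things: SOPs, playbooks
    general_domain: str = ""   # 2. What we know: business, market, competitors
    specific_domain: str = ""  # 3. What we're doing right now: goals, budgets, timelines
    external: str = ""         # 4. What's happening around us: seasonality, platform changes
    episodic: list = field(default_factory=list)  # 5. Past tests, lessons learned
    preferences: str = ""      # 6. How we like it done: tone, format, level of detail


def build_system_prompt(ctx: AgentContext, task: str) -> str:
    """Assemble whichever context slots are filled into one prompt."""
    sections = [
        ("How we do things", ctx.procedural),
        ("What we know about this business", ctx.general_domain),
        ("Current goals and constraints", ctx.specific_domain),
        ("Market and platform conditions", ctx.external),
        ("What we've tried before", "\n".join(ctx.episodic)),
        ("Output preferences", ctx.preferences),
    ]
    body = "\n\n".join(f"## {title}\n{text}" for title, text in sections if text)
    return f"{body}\n\n## Task\n{task}"


ctx = AgentContext(
    procedural="PMax for high-SKU e-commerce: split campaigns by margin tier.",
    episodic=["2023: display tanked for this client - conversion tracking wasn't clean."],
    preferences="Bullet-point summaries, plain language, flag anything needing sign-off.",
)
print(build_system_prompt(ctx, "Review this Google Ads account and recommend next steps."))
```

The structure itself isn't the clever part. What matters is that empty slots become visible: if episodic context is blank, you can see exactly why the agent will keep suggesting things you already tried.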

What comes next

I've introduced all six types quickly here because I think the overview is useful. But each one deserves more attention - particularly how hard they are to capture, where they typically live right now (spoiler: mostly in people's heads), and what the path from "tribal knowledge" to "agent-ready context" actually looks like in practice.

That's what I'll cover over the next couple of editions. I'll also get into the practical architecture problem - because even when you have good context, knowing when to load it and how much to include matters more than most people realise. More context doesn't always mean better output.
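
As a small preview of that architecture problem, here's a toy sketch of the kind of decision involved - given a limited prompt budget, each piece of context has to earn its place rather than being pasted in wholesale. The relevance scores and token counts are placeholder assumptions, not real measurements:

```python
# A toy sketch of budget-aware context selection. Relevance scores and token
# counts are placeholder assumptions; a real system would compute both.

def select_context(pieces: list, budget: int) -> list:
    """Greedily keep the most relevant pieces that fit the token budget."""
    chosen, used = [], 0
    for text, relevance, tokens in sorted(pieces, key=lambda p: p[1], reverse=True):
        if used + tokens <= budget:
            chosen.append(text)
            used += tokens
    return chosen


pieces = [
    ("Full PMax playbook", 0.9, 4000),
    ("Last quarter's test log", 0.7, 1200),
    ("Company history since 2009", 0.2, 3000),
]
print(select_context(pieces, budget=6000))
# -> keeps the playbook and the test log; the low-relevance history is dropped
```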

For now, the takeaway is this: the model is the same for everyone. Your context is what makes it yours. And the most valuable context you have is already in your team - you just need to capture it.