Last week I wrote about the spectrum of automation - from structured flows to agentic processes, and why the distinction is becoming more important as AI usage becomes more widespread. I also touched on what I think is the most underestimated advantage of using agents - that agentic workflows can "learn" from human feedback and accumulate context over time.
This week I want to go into a bit more detail. There are three reasons I think agentic approaches are the direction certain types of work are heading, and I want to talk through each more thoroughly.
Caveat
Before we start - I'm not saying "agentic is always better." Structured automation is perfect for tasks that remain consistent over time, like large-scale raw data transfers. We don't need to reinvent the wheel for absolutely everything.
The three advantages below apply to tasks that typically need some level of human judgement today. The ones that have more nuance than "is X higher or lower than Y". Things you need to spend time thinking about - but because you haven't got enough hours in the day to think about them all the time, they often get shoehorned into a check against a fixed threshold.
1. Flexibility: update a brief, not code
This is the most useful, practical advantage of using agents right now. The fact that anyone, regardless of technical knowledge or skill, can "program" them with natural language.
With a structured flow, every change to how it works means editing the workflow - whether that's in a tool like Zapier/Make.com/n8n or actually in code. Need to onboard a new client with different targets? Edit the thresholds. A platform changes their API structure? Rewire the connection. A client shifts their goals from lead volume to lead quality? Update the logic to prioritise different metrics.
For anyone managing multiple processes, this adds up fast. Every customer has slightly different goals, slightly different definitions of "good performance," slightly different constraints. In a structured flow, those differences mean more branches, more conditions, more maintenance. The more variables you add, the more complex and hard to manage the flow gets.
With an agentic approach, a lot of those updates now happen in natural language. Instead of editing a flowchart, you're updating a brief. "This customer has shifted focus from lead volume to lead quality. Prioritise cost per qualified lead over raw CPA." The agent reads that context and adjusts how it approaches the task and can even write its own tools on the fly if an API changes. You don't need to touch any logic or connections.
The practical difference: when something changes (and things always change), you're spending minutes updating context instead of hours reprogramming a workflow. For anyone managing dozens of customers and processes, that's a significant shift.
2. Technical gains: model improvements compound
If you build a complex structured automation today - whether that's a detailed n8n flow, a set of Python scripts, or a chain of API calls - it's as good as it'll ever be on the day you ship it. From that point on, it only gets more outdated as the world changes around it. Platforms update their APIs, client requirements shift, new edge cases appear. The flow doesn't adapt to any of this. You maintain it, or it degrades. I used to build a lot of automations like this in previous jobs and ended up spending as much or more time updating existing tools than building new ones.
An agentic system works the other way around. Because the reasoning and decision-making is being done by the underlying model, every time that model gets a meaningful upgrade, your agent gets "smarter" without you changing anything.
And the models are improving fast. Just look at the general trajectory over the last 18 months - the coding ability, the benchmarks, the quality of image and video creation, the ability to follow complex multi-step instructions. New model releases are coming every few months (or weeks at the moment!) from multiple providers, and each generation is a step up. When I take a step back and think about how much more efficient I am now than 1, 2, 3 years ago, it really hits home how much LLMs have improved.
I built my first Google Ads agent in November 2023, using OpenAI's GPT-4 model. That was state of the art at the time. But take the same approach to breaking down the user's ask, the same tool access and the same type of context, and run it on today's models, and it would produce dramatically better output - because the models have got better at reasoning over the same data and tools.
That's fundamentally different from structured automation. A flow is a depreciating asset - it's at peak value on day one and declines from there unless you actively maintain it. An agentic system is the opposite - it improves as the technology underneath it improves.
The practical implication: time spent building good agentic architecture (the context management, the tool access, the review workflows) is time that compounds. Time spent building complex rigid flows is time that needs ongoing maintenance just to stay at the level it's at now.
3. Learning: feedback compounds
I covered this in more detail last week, but the short version: a structured flow does the same thing today as it did yesterday, every time, forever, unless you manually change the logic.
An agentic workflow can capture and act on feedback. When the agent makes a recommendation and you say "no, that's wrong because of X," that correction becomes context for the next run. Over weeks and months, the system builds up a picture of what matters for each specific process - what's been tried, what worked, what didn't, what to watch out for. This is the same way we think about repetitive tasks in our day to day work now.
The Google Search Ads keyword example from last week is a good illustration - the agent recommends pausing high-CPC (cost per click) keywords, you explain they're deliberately targeted competitor terms where a high CPC is expected, and that knowledge becomes part of how the agent approaches that case (and similar ones) in future. No manual logic or "rule exception" updates needed.
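To make that mechanism concrete, here's a toy sketch of feedback becoming persistent context. I'm assuming the simplest possible storage - an append-only notes file per account that gets prepended to the agent's next run. Real agent frameworks handle this differently; the point is that the correction is stored as plain text, not as a new rule in code.

```python
from pathlib import Path

# Hypothetical per-account notes file - the "memory" the agent reads each run.
NOTES = Path("account_123_notes.md")

def record_feedback(note: str) -> None:
    """Append a human correction to the account's context notes."""
    with NOTES.open("a") as f:
        f.write(f"- {note}\n")

def build_context() -> str:
    """Everything in the notes file gets included in the agent's next run."""
    return NOTES.read_text() if NOTES.exists() else ""

# The keyword correction from the example above, captured as context:
record_feedback(
    "High-CPC keywords matching competitor brands are deliberate - "
    "do not recommend pausing them on cost alone."
)
print(build_context())
```

Next run, that note sits alongside the performance data, so the agent can reason about the exception instead of tripping the same wire again.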
What builds up over time is genuinely valuable: an externalised version of the institutional knowledge that normally lives in people's heads. The stuff that makes someone with two years of experience better than someone picking up tasks fresh.
What this looks like in practice
To make all three tangible, let's take that same task again: "review keyword performance and take action on what you find."
Structured automation handles this with rules: IF cost per click is above £10 AND the keyword has more than 20 clicks, pause it. This works well for extreme cases based on pure maths. But it only handles the cases you wrote rules for. Different customers/campaigns/goals mean editing every threshold. There's no underlying tech to improve anything. Every change to the logic means a manual update.
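In code, that rule is just a couple of hard-coded thresholds (the field names here are made up for illustration; the £10 and 20-click limits come straight from the rule above):

```python
# Fixed thresholds - changing them for a new client means editing this file.
CPC_LIMIT = 10.0   # £
MIN_CLICKS = 20

def keywords_to_pause(keywords):
    """Return keywords that trip both fixed thresholds."""
    return [
        kw for kw in keywords
        if kw["cpc"] > CPC_LIMIT and kw["clicks"] > MIN_CLICKS
    ]

keywords = [
    {"name": "buy widgets", "cpc": 12.5, "clicks": 40},      # trips both rules
    {"name": "widget reviews", "cpc": 3.2, "clicks": 150},   # cheap, fine
    {"name": "competitor brand", "cpc": 15.0, "clicks": 5},  # expensive but deliberate
]

print([kw["name"] for kw in keywords_to_pause(keywords)])
```

Note the competitor-brand keyword slips under the click threshold here, but a week of extra traffic would get it paused - exactly the kind of exception a rule can't reason about.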
An LLM-enhanced flow improves on this by sending keyword data to GPT/Claude/Gemini and having it classify each keyword as "good / needs attention / watch." This adds smarter decision-making, and some natural language analysis of what a "relevant" keyword is at that specific point. But the rest of the flow is still fixed - you've still defined what happens after each classification, and you're still editing the flow when things change. When the models improve, that one classification step gets better, but nothing else does.
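A sketch of that middle ground might look like this. `call_llm` is a stand-in for whichever provider SDK you use (here it returns a canned response so the example runs offline) - and notice that everything around the one smart step is still fixed pipeline logic:

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real provider call - canned response for the sketch."""
    return json.dumps({"buy widgets": "needs attention", "widget reviews": "good"})

def classify_keywords(keywords):
    # The one "smart" step: ask the model to label each keyword.
    prompt = (
        "Classify each keyword as good / needs attention / watch, "
        "given this performance data:\n" + json.dumps(keywords)
    )
    labels = json.loads(call_llm(prompt))

    # Everything after classification is still hard-coded flow logic:
    actions = []
    for name, label in labels.items():
        if label == "needs attention":
            actions.append(("flag_for_review", name))
        elif label == "watch":
            actions.append(("add_to_watchlist", name))
    return actions

print(classify_keywords([
    {"name": "buy widgets", "cpc": 12.5, "clicks": 40},
    {"name": "widget reviews", "cpc": 3.2, "clicks": 150},
]))
```

Swap in a better model and the labels get sharper, but the actions available and the branching between them only change when you edit the code.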
An agentic approach looks different: "Review this account's keywords against the client's targets. Check trends over the last 30–90 days. Flag anything that needs attention and explain why." The agent decides what data to pull, what comparisons matter, what "needs attention" means based on everything it knows right now - the client's goals, the season, what's been tried before, what the actual performance trends look like.
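Structurally, the agentic version inverts what you author: you write the brief and expose the tools, and the model-driven loop decides the rest. The sketch below is illustrative only - `run_agent` is a stub standing in for a real tool-use loop from a provider's API, and all the names are hypothetical:

```python
# The parts a human actually authors: a natural-language brief and tool access.
CLIENT_BRIEF = """
Client: Acme Ltd. Goal: lead quality over lead volume.
Prioritise cost per qualified lead over raw CPA.
Competitor-brand keywords are deliberately targeted; high CPC is expected.
"""

TOOLS = {
    "get_keyword_stats": "per-keyword performance for a date range",
    "get_trend": "a metric's trend over the last 30-90 days",
}

def run_agent(task: str, context: str, tools: dict) -> str:
    """Stub for the model-driven loop: a real implementation would have an
    LLM read the task and context, choose tool calls, and write up findings."""
    return "stub - a real implementation would call an LLM here"

report = run_agent(
    task="Review this account's keywords against the client's targets. "
         "Check trends over the last 30-90 days. "
         "Flag anything that needs attention and explain why.",
    context=CLIENT_BRIEF,
    tools=TOOLS,
)
```

The thing to notice is what's absent: no thresholds, no branching. A new client goal is an edit to `CLIENT_BRIEF`, not to logic.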
A new client? Update the client brief in natural language. Something comes up you didn't anticipate? The agent reasons about it at runtime using whatever context it has. When the models improve? The entire workflow gets smarter - not just one decision point. And if the agent recommends something you disagree with, that feedback becomes context for next time.
Structured automation requires much more ongoing maintenance - every change means new logic. LLM-enhanced flows are in the middle - the flow structure still needs updating when things change. Agentic workflows are lower maintenance - you update context (by running and responding to the agent), not workflow logic.
The caveat
As powerful as they are, agentic processes still can't be "set and forget". Agents don't always get it right, even as models improve. We will likely never completely stop LLMs from "hallucinating" (making things up), because of the way they actually work. And none of this means you can skip the hard work of working out the best information and context for the agent to work with.
An agent with bad context will confidently do the wrong thing. That's potentially worse than a structured flow that does exactly what you programmed it to, even if what you programmed it to do is limited.
The flexibility advantage only works if the context you're providing is actually valuable. The continuous learning advantage only works if someone is actively reviewing the output and providing useful feedback. The biggest mistake I see people make is not interrogating AI output because it's detailed, produced at impressive speed and looks right. The model improvement advantage is only relevant if the agent is actually doing valuable work in the first place.
That's what I think is the single most important topic in AI and agent building at the moment, and what I'll write more about next week: context.
That's the idea that the model is the same for everyone, but what you give it to work with is what makes the difference. I'll talk about what context actually means in practice, where it comes from, and why your team's expertise is the thing that turns a generic AI tool into something genuinely valuable.