Last week I wrote about the gap between AI adoption and actual results - 88% of organisations using AI, 6% seeing any real impact. I believe a big part of that gap isn't about which model or tool you pick, it's about how you think about the problem.
This week I want to dig into one specific piece of that: the different types of automations, and why the approach matters more than most people realise.
The spectrum
I think there's a useful way to think about where most automation sits right now. It's not a binary choice between "traditional automation" and "AI agents" - there's a whole spectrum, with at least three major categories.
Stage 1: Structured automation. Rules-based workflows. If X happens, do Y. Every path, every condition, every step is defined by you upfront. Zapier, n8n, Make, Python scripts, Google Ads scripts - all of those live here. They're reliable and predictable, and for a lot of tasks (moving data between platforms, triggering alerts, syncing systems) they're the right approach.
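A stage 1 flow in code is just branching logic. Here's a minimal sketch - the trigger, field names and actions are all hypothetical examples, not a real integration:

```python
# Stage 1: every path is written out by hand. If X happens, do Y.

def handle_lead(lead: dict) -> str:
    """Route an incoming lead using fixed, upfront-defined rules."""
    if lead.get("budget", 0) >= 10_000:
        return "notify_sales_team"      # high value -> alert a human
    if lead.get("country") not in ("UK", "IE"):
        return "send_regional_partner"  # out of region -> hand off
    return "add_to_nurture_sequence"    # everything else -> default path
```

A new scenario means a new branch, added by you; the flow never handles anything it wasn't explicitly told about.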
Stage 2: LLM-enhanced flows. Same structured workflows, but with an API call to OpenAI, Claude or Gemini dropped in at a specific point. The LLM classifies something, generates some text, or makes a decision that would've been a messy, complicated if/else. The overall flow is still fixed and defined by you - but one or two decision points are now "smarter."
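In code, stage 2 looks almost identical - except one messy decision is delegated to a model. In the sketch below, `classify_enquiry` stands in for a real API call to OpenAI, Claude or Gemini; the keyword version is a stub so the example runs on its own:

```python
# Stage 2: the flow is still fixed; one decision point is now an LLM call.

def classify_enquiry(text: str) -> str:
    # Stand-in for a single LLM API call that returns a label.
    if "refund" in text.lower() or "cancel" in text.lower():
        return "billing"
    return "general"

def handle_enquiry(text: str) -> str:
    """The surrounding steps are still defined upfront; only the
    classification is 'smarter' than a hand-written if/else."""
    label = classify_enquiry(text)  # <- the one LLM decision point
    if label == "billing":
        return "route_to_billing_queue"
    return "route_to_general_queue"
```

The model chooses between routes you designed; it never adds a route of its own.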
Stage 3: Agentic workflows. The LLM isn't just making decisions at certain points you defined (like a smarter if/else in Stage 2). It's given a goal and a set of tools and functions to call, and it works out how to achieve that goal based on the current context. That context is updated on the fly - from local files, data fetches or natural language prompts.
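The structural shift at stage 3 is from a fixed sequence to a loop: the model looks at the goal and what it has learned so far, picks the next tool, and repeats until it's done. A minimal sketch of that loop - `plan_next_step` stands in for the LLM's decision (in a real agent it would be a model call), and the tools are hypothetical stubs:

```python
# Stage 3: a goal, a set of tools, and a loop where the model (stubbed
# here) decides what to call next based on what it has seen so far.

def fetch_spend(account):   return {"spend": 1200}
def fetch_targets(account): return {"target_spend": 1000}

TOOLS = {"fetch_spend": fetch_spend, "fetch_targets": fetch_targets}

def plan_next_step(goal, observations):
    # Stand-in planner: gather each tool's output once, then stop.
    for name in TOOLS:
        if name not in observations:
            return name
    return None  # nothing left to do

def run_agent(goal: str, account: str) -> dict:
    observations = {}
    while (tool := plan_next_step(goal, observations)) is not None:
        observations[tool] = TOOLS[tool](account)  # execute chosen tool
    return observations
```

Notice there's no fixed order of steps anywhere - the sequence emerges from the planner's choices, which is exactly what makes the behaviour adapt when the context changes.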
Most people I talk to are using automation based on stages 1 and 2.
The interesting question is whether there are tasks that keep outgrowing those first two stages - where you're spending more time maintaining and updating the automation than it's saving you.
The distinction that actually matters
Over the last few years I've built loads of workflows with LLM calls at specific points - the stage 2 approach. They work, and in some cases that's genuinely all you need. But there's a real difference between that and an agentic workflow, and it's worth being clear about what that actually means.
In a stage 2 flow, the LLM is still essentially a sophisticated if/else. It's choosing between routes you designed. The rest of the steps still happen in the order you defined. When a new scenario comes up that you didn't anticipate, the whole flow needs updating.
I think of it like giving the AI a multiple-choice question at certain points in your flow. An agentic approach gives it an open-ended brief and lets it figure out the best steps based on the current context.
Something like "review this account's performance against the client's targets, check trends over the last 30 days, flag anything that needs attention" - where the agent decides what data to pull, what to compare, what counts as "needs attention" based on everything it knows at that moment in time. The date, the season, whether there's been a recent sale, what the client's actual goals are, what's been tried before. The agent runs the process slightly differently each time because the context is different each time.
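To make that concrete, here's a sketch of the kind of context bundle an agent might receive alongside that brief. Every field and value is a hypothetical example - the point is that the brief stays the same while this bundle differs on every run:

```python
# The same open-ended brief behaves differently each run because the
# context attached to it changes each run. All values are illustrative.
from datetime import date

def build_context(account: str) -> dict:
    return {
        "account": account,
        "run_date": date.today().isoformat(),  # different every run
        "client_goals": ["grow non-brand revenue", "hold CPA under £40"],
        "recent_events": ["spring sale launched 3 days ago"],
        "previous_findings": ["CPCs spike every Monday morning"],
    }

def build_brief(account: str) -> str:
    ctx = build_context(account)
    return (
        "Review this account's performance against the client's targets, "
        "check trends over the last 30 days, flag anything that needs "
        f"attention. Context: {ctx}"
    )
```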
That's a fundamentally different way of approaching "automation". Structured flows either have a branch for a scenario or they don't. If something unexpected comes up or anything changes, the flow either breaks or does the wrong thing silently (which is probably worse). An agentic workflow can reason about new situations while it's running - more like a person would (at the risk of anthropomorphising AI).
The bit I think is most underestimated
There's one advantage of agentic workflows that I don't think gets enough attention, and it's not the one most people talk about first.
Everyone thinks about the underlying models improving (and they're improving fast - we seem to get new releases every few months at the moment), and therefore an agentic system built on those models improves as well. A structured flow is only as good as it was on the day you built it. An agent running on the same architecture gets smarter when the model underneath improves. I built my first agent on GPT-4 in November 2023, and the same setup running on today's models would be dramatically better (and cheaper) without me changing anything.
But the thing I find more interesting - and more practically useful right now - is that as part of reacting to new context, agentic workflows can learn from human feedback.
A structured flow does exactly the same thing today as it did yesterday, every time, unless you go in and manually change the logic. If it gets something wrong, you fix the flow. If there's a request for a new feature or something changes, the process needs updating.
With an agentic approach, when the agent makes a recommendation and you say "no, that's wrong because of X," that feedback can be captured and stored as context for next time. Over weeks and months, you build up a picture of what matters for this specific process, what's been tried before, what worked, what didn't.
For example - an agent reviews a Google Ads account and recommends pausing some keywords because CPC is high. You look at it and say "those are competitor brand keywords, we're deliberately bidding on those to capture market share, the CPCs are expected to be high." In a structured flow, you'd need to code in an exception for those campaigns and update it every time you launched new competitor keywords. In an agentic setup, that feedback becomes part of the context the agent works with next time - and can even inform a similar flow in a different account. No manual logic updates needed - it just has a new piece of knowledge.
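The mechanics of capturing that feedback can be very simple. Here's a minimal sketch, assuming a JSON-lines file as the store (the filename and field names are hypothetical): the correction is recorded once, then loaded into the agent's context on every future run, with no changes to the flow's logic:

```python
# Capture human feedback once; inject it as context on every later run.
import json
from pathlib import Path

FEEDBACK_FILE = Path("feedback.jsonl")  # hypothetical store location

def record_feedback(account: str, note: str) -> None:
    with FEEDBACK_FILE.open("a") as f:
        f.write(json.dumps({"account": account, "note": note}) + "\n")

def load_feedback(account: str) -> list[str]:
    """Return every note ever recorded for this account."""
    if not FEEDBACK_FILE.exists():
        return []
    notes = []
    for line in FEEDBACK_FILE.read_text().splitlines():
        entry = json.loads(line)
        if entry["account"] == account:
            notes.append(entry["note"])
    return notes

record_feedback("acme", "Competitor brand keywords are deliberate; "
                        "high CPCs there are expected.")
context_notes = load_feedback("acme")  # prepend these to the next run
```

In practice you might store this in a database or a vector store instead, but the principle is the same: the flow's code never changes, only the context it runs with.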
Run that process enough times and something genuinely valuable builds up: an externalised version of everything your team knows about each process they run. The little things that make someone with two years' experience better than someone picking the process up fresh. Now that knowledge is captured, written down, and available to the agent every time it runs.
The more context an agent has, the better its output. And one of the richest sources of context is the history of its own previous runs and the human feedback on each one. That compounds over time in a way a fixed automation flow simply can't.
Structured flows are frozen on the day you build them. Agentic workflows get better the more you use them.
Where this leaves you
I'm not saying throw away your n8n flows or scripts. If they're working and they're not eating your time with maintenance, keep running them. Structured automation is brilliant for tasks that are the same every time (like moving data around between systems).
But think about where you're spending the most time updating and maintaining automation. Think about the processes where edge cases keep catching you out, where different clients need different handling, where the logic keeps getting more complicated. Those are the tasks where an agentic approach will give you leverage your current setup can't.
Next week I want to go deeper on the context point - because even the best agentic workflow is only as good as the information it's working with. Give an agent bad context and it'll confidently do the wrong thing, which is arguably worse than a rigid flow that at least does exactly what you told it to.
Your expertise - your processes, your client knowledge, your testing history - is what makes the difference between an agent that gives generic textbook advice and one that actually reflects how your business works.