One of the most valuable implementation skills in workflow design is not knowing how to automate. It is knowing when to stop.
That may sound counterintuitive for a site focused on practical automation, but it is actually central to building good systems. Weak automation programmes usually do not fail because the tools are incapable. They fail because the wrong tasks were selected, the wrong assumptions were made, or the workflow was asked to do something it was never well suited to do.
Careful Selection Criteria
In engineering contexts, the pressure to automate often comes from visible inefficiency. A process looks slow, repetitive, and frustrating, so the instinct is to target it. But visible repetition is not enough. Before automating, it helps to assess the task across four dimensions:

1. Volume
Does the task occur often enough to justify investment?
A quarterly activity done by one person for ten minutes may not need automation, even if it is annoying. A monthly task affecting multiple people for several hours almost certainly deserves attention.
2. Variability
Is the process consistent, or does it change every time?
Highly variable processes are often poor automation candidates in early stages. If the inputs, business logic, or decision criteria shift constantly, the automation will need frequent rewrites.
3. Consequence
What happens if the automation gets it wrong?
If the output simply prepares a draft table for review, the risk may be manageable. If the output materially affects maintenance strategy, assurance decisions, or external reporting without strong review, the stakes are much higher.
4. Interpretive dependency
How much of the real task depends on tacit human context?
If success depends on hidden assumptions, local knowledge, stakeholder nuance, or physical understanding of a system, the workflow may need to keep humans much closer to the centre.
In practice, I like using a simple screening model:
Good early automation candidate
repetitive,
structured,
low-to-medium consequence,
easy to review,
painful enough to matter.
Poor early automation candidate
highly variable,
consequence-heavy,
context-rich,
hard to review,
only occasionally performed.
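The screening model above can be sketched as a small scoring helper. Everything specific here is an illustrative assumption rather than part of the framework: the 1-to-5 rating scales, the threshold values, and the field names are one possible way to make the four dimensions comparable.

```python
from dataclasses import dataclass


@dataclass
class TaskAssessment:
    """Rate each dimension 1 (low) to 5 (high). Scales are illustrative."""
    volume: int                   # how often the task occurs / people affected
    variability: int              # how much the process changes each run
    consequence: int              # impact if the automation gets it wrong
    interpretive_dependency: int  # reliance on tacit human context


def is_good_early_candidate(task: TaskAssessment) -> bool:
    # Good early candidates are frequent and structured (high volume,
    # low variability), with manageable stakes and little hidden context.
    # The threshold values are assumptions for illustration only.
    return (
        task.volume >= 3
        and task.variability <= 2
        and task.consequence <= 3
        and task.interpretive_dependency <= 2
    )


# Example: a monthly, highly structured report that is easy to review.
monthly_report = TaskAssessment(volume=4, variability=1,
                                consequence=2, interpretive_dependency=1)
print(is_good_early_candidate(monthly_report))  # True under these assumptions
```

Even a crude rubric like this forces the conversation away from "this task is annoying" and towards "this task scores poorly on variability", which is where the useful disagreements surface.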
This framework becomes especially useful in Microsoft-heavy environments where professionals now have access to increasingly capable assistants such as Copilot. The danger is not only bad code. It is also the ease with which people may automate a draft, a summary, or a classification step without fully understanding how much downstream confidence that output creates.
A practical safeguard is to define the automation’s role before building it. Ask:
Is this workflow generating a draft, a recommendation, or a final answer?
Who reviews it?
What evidence supports the output?
What would trigger manual escalation?
How will errors be detected?
If those questions cannot be answered clearly, the automation is probably premature.
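One way to make that readiness check concrete is a short pre-build gate. This is a sketch, not a prescribed implementation: the question list comes from the text above, but the rule that any unanswered question marks the automation as premature is an assumption about how strictly to apply it.

```python
READINESS_QUESTIONS = [
    "Is this workflow generating a draft, a recommendation, or a final answer?",
    "Who reviews it?",
    "What evidence supports the output?",
    "What would trigger manual escalation?",
    "How will errors be detected?",
]


def automation_is_premature(answers: dict[str, str]) -> bool:
    # If any question lacks a clear written answer, treat the
    # automation as premature (a deliberately strict assumption).
    return any(not answers.get(q, "").strip() for q in READINESS_QUESTIONS)


answers = {
    READINESS_QUESTIONS[0]: "A draft table, reviewed before use.",
    READINESS_QUESTIONS[1]: "The engineer who owns the report.",
    # Remaining three questions left unanswered.
}
print(automation_is_premature(answers))  # True: three questions are unanswered
```

The value is less in the code than in the artefact it forces: a written answer per question that a reviewer can challenge before anything is built.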
This is why “do not automate yet” should be considered a valid professional conclusion, not a failure. Sometimes the right move is to standardise inputs first. Sometimes it is to simplify a process before touching it. Sometimes it is to build a small preparation script rather than an end-to-end workflow. And sometimes it is to leave the process human because the judgement involved is the point of the work.
That discipline matters commercially as well. If you ever want to package, advise on, or sell workflow improvements, credibility will depend not only on what you can automate, but on whether you can identify the wrong automation target before others waste time on it.
Good automation is not just technical execution.
It is process judgement.
And in many organisations, that judgement is far rarer — and more valuable — than the code itself.
Suggested GitHub companion:
workflow assessment checklist
scoring template: volume / variability / consequence / interpretive dependency
sample “automation candidate review” worksheet