
AI can transform operations: better decisions, fewer manual tasks, faster turnaround times. At the same time, many companies are experiencing the opposite: more exceptions, more discussions, less trust.
The reason is rarely “the model.” It is almost always the process.
Why automation increases instability
Automation – whether RPA, workflows, or GenAI – reinforces existing patterns:
- Stable standards become faster and cheaper.
- Unstable processes destabilize even faster – just at higher throughput.
When data comes from workarounds, AI learns workarounds. When decisions are not standardized, AI provides “answers” but no controllability.
Process feasibility as a minimum requirement (ISO/PDCA)
The ISO 9001 process approach describes how organizations define processes, control interactions, and improve them using key performance indicators. PDCA is the control and learning system.
HSC is not standard consulting: we are often called in when projects escalate or operations become unreliable. In such cases, we first stabilize execution (leadership, standards, visualization) – only then is automation worthwhile.
Typical error patterns in AI operations projects
- Use case overload: too many automations without a clear value stream focus
- Decision gap: AI “decides,” but responsibilities are unclear
- Metric illusion: output increases, but outcome (quality/predictability) does not
A practical process model (6 steps)
1. Define the process interface
Start/end, input/output, roles, escalation.
2. Minimum standard (SOP light)
Not an 80-page document: 1–2 pages with rules, exceptions, quality gates.
3. Variant management
Define the standard flow and make the exception process visible.
4. Measurement points & bottlenecks
Measure waiting times, rework, handovers, sources of error.
5. Decision matrix
- AI may decide (only with clear rules + auditability)
- AI recommends (human-in-the-loop)
- AI classifies/triage (low risk)
6. PDCA governance
Regular reviews, root cause analysis, corrective measures, drift/bias checks.
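The decision matrix in step 5 can be sketched as a simple routing rule. This is a minimal illustration, not a reference implementation: the fields `risk`, `rule_covered`, and `confidence` and the 0.9 threshold are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A work item entering the AI-supported process (hypothetical fields)."""
    risk: str            # "low", "medium", or "high"
    rule_covered: bool   # a documented, auditable rule applies
    confidence: float    # model confidence, 0..1 (assumed threshold below)

def route(case: Case) -> str:
    """Map a case to one of the three decision modes from the matrix."""
    if case.risk == "low":
        return "ai_triage"              # AI classifies/triages (low risk)
    if case.rule_covered and case.confidence >= 0.9:
        return "ai_decides"             # only with clear rules + auditability
    return "human_in_the_loop"          # AI recommends, a person decides
```

The point of encoding the matrix explicitly is auditability: every automated decision can be traced back to one named branch, which is exactly what the PDCA reviews in step 6 need to inspect.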
KPIs & guidelines
- Throughput time (median + dispersion)
- First-pass yield / rework
- WIP / backlog
- Human override rate
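The KPIs above can be computed from plain case records. A minimal sketch with the Python standard library, using invented sample data; the interquartile range stands in for "dispersion":

```python
import statistics

# Hypothetical samples: throughput times in hours and per-case outcomes.
throughput_hours = [4.0, 5.5, 3.0, 12.0, 4.5, 6.0, 30.0]
cases = [
    {"first_pass": True,  "ai_decision": True,  "overridden": False},
    {"first_pass": False, "ai_decision": True,  "overridden": True},
    {"first_pass": True,  "ai_decision": False, "overridden": False},
    {"first_pass": True,  "ai_decision": True,  "overridden": False},
]

# Throughput time: median plus dispersion (interquartile range).
median = statistics.median(throughput_hours)
q1, _, q3 = statistics.quantiles(throughput_hours, n=4)
iqr = q3 - q1

# First-pass yield: share of cases done right the first time.
fpy = sum(c["first_pass"] for c in cases) / len(cases)

# Human override rate: overrides per AI-made decision.
ai_cases = [c for c in cases if c["ai_decision"]]
override_rate = sum(c["overridden"] for c in ai_cases) / len(ai_cases)

print(f"median={median}h, IQR={iqr}h, FPY={fpy:.0%}, override rate={override_rate:.0%}")
```

Reporting the median together with the IQR (rather than the mean alone) matters here: a few long-running exceptions inflate the mean, while the median/IQR pair shows whether the standard flow itself is stable.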
Risks / trade-offs
- Loss of trust due to black-box automation without comprehensible rules
- Over-standardization: local flexibility decreases – so manage variants deliberately
Conclusion: AI is an accelerator, not a foundation
MIT Sloan describes strategies for scaling AI through clearly defined tasks and process integration – not through hype rollouts.
If you want to make AI successful in operations, you first need process feasibility. Then AI will go from being a toy to a lever.
