
Lean has long been established in many companies. The language is familiar, the tools are known, the formats have been introduced. There are stand-ups, shop floor rounds, problem-solving formats, and visualizations. From the outside, this looks like maturity. And yet, operational performance remains fragile in some areas. The same disruption reappears. The same handover stays error-prone. The same discussion recurs week after week.
Surprisingly often, the reason is not a lack of commitment. It is a lack of precision.
Today, lean without data is often nothing more than well-intentioned process gymnastics. This does not mean that everything has to be digitized or equipped with complex analysis tools. It means something simpler: if you want to improve processes, you need a reliable picture of their actual performance.
This is because modern processes are rarely so simple that bottlenecks can be identified with the naked eye. Delays arise in approvals, in prioritization, in unclear interfaces, in follow-up queries, and in breaks between systems. Many of these frictions remain invisible in everyday work until their consequences become noticeably expensive.
This is precisely where lean work, which relies primarily on observation, moderation, and good intentions, reaches its limits. It often remains too close to impressions and too far removed from the actual situation.
This highlights the difference between two types of improvement. The first type is activity-driven. It generates workshops, action plans, and recurring meetings. The second type is impact-driven. It begins with process clarity, uses a few relevant metrics, and anchors leadership in a fixed cycle.
This second type does not require a flood of metrics. On the contrary, too many metrics are often a sign of unclear steering. A small, clean set is usually more effective. In practice, four variables are particularly helpful: throughput time, first pass yield, adherence to deadlines, and escalation rate. Together, they show whether the process is flowing, whether quality is stable on the first pass, whether commitments are being kept, and how often the standard control process is deviated from.
It is important that each metric is clearly defined. When exactly does the throughput time begin? When is a process considered complete? What counts as rework? Who owns the figure? What is its source? Without this clarity, measurement quickly becomes a dispute over terminology.
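The definitional questions above can be pinned down by making each metric an explicit computation over a record of the process. The sketch below is a minimal illustration, not a prescribed implementation: the `ProcessRecord` fields and the choice of where throughput time starts and ends are assumptions that each organization would have to fix for itself.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class ProcessRecord:
    started: datetime    # assumed start definition: order accepted
    finished: datetime   # assumed end definition: completion confirmed
    due: datetime        # the committed deadline
    rework: bool         # True if the item needed a second pass
    escalated: bool      # True if it left the standard control process

def kpis(records: list[ProcessRecord]) -> dict[str, float]:
    """Compute the four steering metrics for a set of completed items."""
    n = len(records)
    return {
        # Throughput time: average days from defined start to defined end
        "throughput_days": mean((r.finished - r.started).days for r in records),
        # First pass yield: share completed without any rework
        "first_pass_yield": sum(not r.rework for r in records) / n,
        # Adherence to deadlines: share finished on or before the due date
        "on_time_rate": sum(r.finished <= r.due for r in records) / n,
        # Escalation rate: share that deviated from the control process
        "escalation_rate": sum(r.escalated for r in records) / n,
    }
```

The point of the sketch is not the code itself but the discipline it forces: every ambiguity in L-definition (start, end, rework, escalation) must be resolved before the number can be computed at all.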
But even clearly defined metrics only reveal their value when they are used to manage the process. A daily or weekly review must do more than exchange status updates. It must highlight deviations, sharpen responsibilities, and define next steps. The strength of a good management rhythm lies in the fact that problems are not collected but dealt with.
From this perspective, it becomes clear why HSC does not start with the big solution, but with stabilizing execution. This is not a modest claim, but a realistic one. Complexity is not made manageable by painting a rosy picture of the future, but by clarity at the operational core. Only when the process is visible and manageable can improvement be sustainable.
Of course, there are risks. Data can be misused. Metrics can distort behavior. Poor data can create a false sense of security. Those who use figures to assign blame lose the openness that real improvement depends on.
That is why guard rails are needed: a few metrics, clear definitions, short management cycles, and a clear purpose. Not control for control’s sake, but better decisions at the right moment.
In the end, the insight is not technological but a matter of leadership practice: process improvement today requires more than adherence to methods. It requires a reliable picture of the situation.
Not everything has to be measured.
But what determines stability, quality, and reliability should be visible.
Then lean becomes not gymnastics, but control.
And control becomes what organizations need most urgently: calm, sustainable execution.