And why it is the responsibility of management to prevent this.
In the spring of 2024, a large manufacturing company introduced an AI-supported reporting system. The idea behind it was to enable faster reporting, better decision-making, and fewer coordination loops. A good idea, right?
Three months later, the effect was the opposite. Decisions took longer, meetings became more chaotic, and teams were more uncertain than before.
The technology itself was not the problem; it worked flawlessly. But the organization did not!
What happened there is not an isolated case. McKinsey (2025) describes this very pattern as one of the biggest misconceptions in AI projects: Technology is overestimated, governance is underestimated.
First stumbling block: a flood of numbers without orientation.
The new AI-supported system could spit out extensive dashboards every morning. But often no one had specified which figures were actually decision-relevant, who was to interpret them, or which number counted as the authoritative one.
When this is left open, the daily management meeting turns into a jungle of numbers. Everyone sees something different, everyone argues with "their" number, and no one knows what applies.
In this case, the AI that was supposed to speed things up actually caused delays because it overwhelmed the organization.
According to McKinsey, this pattern can be found in around 60% of corporate AI pilot projects: the introduction may be technically sound, but it is usually sloppy in terms of organization and strategy. It is strongly reminiscent of a modern high-speed train running on a 100-year-old rail network.
Second stumbling block: lack of rules for use.
Who checks the results? Who is allowed to intervene? Who is responsible if something goes wrong?
Companies often initially assume that "the AI calculates correctly."
But what happens when discrepancies are noticed? In the worst case, a game of hide-and-seek begins: no one wants to take responsibility for decisions based on AI data whose origin and algorithmic processing are unclear. Meetings become more defensive and less clear.
This is a classic illusion of control: the technology conveys a sense of security, and the organization acts as if that security actually existed. Yet responsibility is not anchored at any step of the process.
AI creates new decision-making spaces. But to use them, leadership must define governance with clear rules.
These include:
- who checks and validates the AI's results,
- who is allowed to intervene, and when,
- who is responsible if something goes wrong.
Governance is not a tedious chore here. It is the prerequisite for AI to relieve the organization rather than paralyze it.
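For technically minded readers, one way to make such rules binding is to encode them explicitly instead of leaving them implicit. The following Python sketch is purely illustrative; all names and fields are hypothetical assumptions, not details from the case described above.

```python
from dataclasses import dataclass

# Hypothetical sketch: decision rights for an AI-generated report,
# written down explicitly so responsibility cannot stay vague.
@dataclass
class DecisionRule:
    report: str          # which AI output the rule covers
    owner: str           # who acts on the numbers
    validator: str       # who checks the results before use
    may_override: bool   # whether humans may intervene
    escalation: str      # who is responsible if something goes wrong

RULES = [
    DecisionRule("daily_production_dashboard",
                 owner="plant_manager",
                 validator="controlling",
                 may_override=True,
                 escalation="head_of_operations"),
]

def rule_for(report: str) -> DecisionRule:
    """Refuse to use an AI output that has no anchored responsibility."""
    for rule in RULES:
        if rule.report == report:
            return rule
    raise LookupError(f"No decision rule defined for '{report}' - "
                      "this output must not drive a decision yet.")
```

The point is not the code itself: it is that each of the three questions above gets an explicit, checkable answer before the AI's output is allowed to influence a decision.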
A medium-sized energy company chose the following approach when rolling out its AI forecasting tool:
Even before the technical introduction, workshops were held to define which decisions the forecasts were meant to support, who would check the results, and who would be responsible for acting on them.
The result: faster decisions, fewer justification loops, clearer responsibilities.
The technology was similar in both cases, but the way it was handled was completely different.
AI can be a powerful lever. But only if leadership shapes the playing field on which it is used. Without a clear decision-making architecture, it leads to data noise, mistrust, and paralysis. Only with clear rules and frameworks does it become an accelerator.
👉 “Technology can prepare decisions, but it cannot make them. That remains the task of leadership.”
McKinsey, Superagency in the Workplace (2025)
Practical tip
AI does not accelerate per se. It accelerates where leadership provides guidance and boundaries.
Reflection question:
Which of these patterns fits your organization – and where could you take a first step?
Further sources:
McKinsey, Superagency in the Workplace (2025)
Insights by Stanford Business: Designing AI That Keeps Human Decision-Makers in Mind
https://www.gsb.stanford.edu/insights/designing-ai-keeps-human-decision-makers-mind