When AI hinders rather than helps.

Dr. Diana Astashenko-Huber
30 November 2025

And why it is the responsibility of management to prevent this.

Technology becomes a hindrance when governance is lacking

In the spring of 2024, a large manufacturing company introduced an AI-supported reporting system. The idea behind it was to enable faster reporting, better decision-making, and fewer coordination loops. A good idea, right?

Three months later, the effect was the opposite. Decisions took longer, meetings became more chaotic, and teams were more uncertain than before.

The technology itself was not the problem; it worked flawlessly. The organization did not.

What happened there is not an isolated case. McKinsey (2025) describes this very pattern as one of the biggest misconceptions in AI projects: technology is overestimated, governance underestimated.

1. Data flood without decision-making logic: When AI makes transparency difficult

The new AI-supported system spits out extensive dashboards every morning.

But too often, no one has specified:

  • what the initial quality of the data is,
  • which key figures are really relevant for decision-making,
  • who interprets them and how,
  • and on what basis (across the company!) decisions are made when data is contradictory or unclear.

If these questions remain unanswered, the daily management meeting turns into a jungle of numbers. Everyone sees something different, everyone argues with “their” number, and no one knows which figure counts.
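To make this tangible: such decision-making logic can be written down as explicitly as a piece of configuration. The following minimal sketch (in Python) is purely illustrative – the metric names, owners, thresholds, and escalation paths are invented for the example, not taken from the case described above.

```python
from dataclasses import dataclass

@dataclass
class MetricRule:
    """Explicit decision logic for one dashboard figure."""
    name: str                 # which key figure this rule covers
    owner: str                # who interprets it and answers for it
    decision: str             # which decision the figure actually informs
    min_data_quality: float   # below this share of valid records, do not use it
    on_conflict: str          # agreed company-wide rule for contradictory data

# Hypothetical entries -- metric names, owners, and thresholds are invented.
REGISTRY = [
    MetricRule(
        name="on_time_delivery_rate",
        owner="Head of Logistics",
        decision="Re-prioritize the weekly production plan",
        min_data_quality=0.95,
        on_conflict="Escalate both sources to the daily management meeting",
    ),
    MetricRule(
        name="scrap_rate_forecast",
        owner="Plant Quality Manager",
        decision="Trigger a root-cause review",
        min_data_quality=0.90,
        on_conflict="Fall back to trailing four-week actuals until sources agree",
    ),
]

def usable(rule: MetricRule, measured_quality: float) -> bool:
    """A figure may enter the meeting only if its data quality is known and sufficient."""
    return measured_quality >= rule.min_data_quality
```

The point is not the code itself: it is that every figure that enters the meeting has a named owner, a defined decision it informs, and an agreed rule for conflicts.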

In this case, the AI that was supposed to speed things up ended up causing delays, because it overwhelmed the organization instead of supporting it.

According to McKinsey, this pattern shows up in around 60% of corporate AI pilot projects: the rollout may be technically sound, but it is usually sloppy in organizational and strategic terms. The effect is strongly reminiscent of a modern high-speed train on a 100-year-old rail network.

2. Lack of trust: illusion of control through AI

The second stumbling block: a lack of rules for use.

Who checks the results? Who is allowed to intervene? Who is responsible if something goes wrong?

Companies often initially assume that “the AI calculates correctly.”

But what happens when discrepancies are noticed? In the worst case, a game of hide-and-seek begins: no one wants to take responsibility for decisions based on AI data whose origin and algorithmic processing are unclear. Meetings become more defensive and less clear.

This is a classic case of the illusion of control: the technology conveys a sense of security, and the organization acts as if that security were real. Yet responsibility is often not anchored at any step of the process.

3. Leadership must close the gap and take responsibility for AI governance

AI opens up new decision-making spaces. But to use them, leadership must define governance with clear rules.

These include:

  • Data quality at input: Which data can we continue to work with in good conscience?
  • Purpose & limits of use: What do we use AI for – and what do we explicitly not use it for? And this is a truly demanding task that requires very thorough preparatory work in terms of content and organization.
  • Responsibility: Who checks the results, who decides, who bears the consequences?
  • Comprehensibility: How do we ensure that decisions remain traceable?

Governance is not a tedious chore here. It is the prerequisite for AI to relieve the organization rather than paralyze it.
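Where AI feeds into operational systems, such rules can even be made machine-checkable. The sketch below shows one possible shape of a “decision gate” that refuses to pass an AI recommendation along without an allowed purpose, sufficient input data quality, a named reviewer, and an audit record – all names, thresholds, and purposes here are assumptions for illustration, not a reference implementation.

```python
import datetime
import json

# Explicit limits of use -- purposes here are invented example values.
ALLOWED_PURPOSES = {"demand_forecast", "capacity_planning"}
MIN_DATA_QUALITY = 0.9  # assumed agreed minimum share of valid input records

def gate_ai_recommendation(recommendation: dict, purpose: str,
                           data_quality: float, reviewer: str) -> dict:
    """Pass an AI recommendation on only if the governance rules hold.

    Mirrors the four rules from the text: purpose & limits of use,
    data quality at input, named responsibility, and traceability.
    """
    if purpose not in ALLOWED_PURPOSES:
        raise ValueError(f"AI is explicitly not used for '{purpose}'")
    if data_quality < MIN_DATA_QUALITY:
        raise ValueError("Input data quality below the agreed minimum")
    if not reviewer:
        raise ValueError("No named reviewer: responsibility is not anchored")

    # Traceability: every gated recommendation leaves an audit record.
    audit_record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "purpose": purpose,
        "reviewer": reviewer,        # who checks, decides, and bears the consequences
        "data_quality": data_quality,
        "recommendation": recommendation,
    }
    print(json.dumps(audit_record))  # stand-in for a real audit log
    return audit_record
```

The value of such a gate lies less in the few lines of code than in the fact that responsibility and traceability are anchored before a recommendation ever reaches a meeting.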

4. Successful example: AI introduction with clear role definition

A medium-sized energy company chose the following approach when rolling out its AI forecasting tool:

Even before the technical introduction, workshops were held to define:

  • which decisions AI is allowed to prepare,
  • who bears the final responsibility,
  • and how uncertainties are communicated transparently.

The result: faster decisions, fewer justification loops, clearer responsibilities.

The technology was similar in both cases, but the way it was handled was completely different.

Conclusion: Leadership gives AI its direction

AI can be a powerful lever. But only if leadership shapes the playing field on which it is used. Without a clear decision-making architecture, it leads to data noise, mistrust, and paralysis. Only with clear rules and frameworks does it become an accelerator.

👉 “Technology can prepare decisions, but it cannot make them. That remains the task of leadership.”

McKinsey, Superagency in the Workplace (2025)


Practical tip

  • Define before deployment: What can AI decide – and what can't it decide?
  • Assign clear responsibilities for review and final decision-making.
  • Create transparency so that decisions remain traceable.

AI does not accelerate per se. It accelerates where leadership provides guidance and boundaries.

Reflection question:

Which of these patterns fits your organization – and where could a first step be taken?


Further sources:

McKinsey, Superagency in the Workplace (2025):

https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work

Insights by Stanford Business: Designing AI That Keeps Human Decision-Makers in Mind

https://www.gsb.stanford.edu/insights/designing-ai-keeps-human-decision-makers-mind

About me

Dr. Diana Astashenko, Full Stack Consultant. As familiar with the frontend (workshops, process facilitation, coaching) as with the backend (process architecture, workshop design, content development). Focus areas: strategy development, strategy implementation, digital didactics, and megatrends. A sociologist and educator by training. Curious by nature about (almost) everything.