56% of CEOs See No ROI from AI — And They’re Seriously Surprised?
An Industrial Translation of Harald Weiss’s Analysis on heise online
Harald Weiss published a sobering analysis on heise online: 56 percent of CEOs report neither revenue nor cost benefits from their AI investments. Only 12 percent claim measurable success on both dimensions. Meanwhile, 67 percent of companies are increasing their GenAI budgets further.
If you work in manufacturing, none of this surprises you. If it does, you haven’t understood the problem.
I’ve been saying this for months: Agentic AI is not a technology problem. It’s a leadership problem. And the heise article delivers the numbers to prove it.
But Weiss argues — by nature of the publication — from an IT architecture perspective. He describes orchestration stacks, state management and inference costs. All correct. But to the COO of a mid-sized manufacturer with 800 employees, it reads like a foreign language.
Here’s the industrial translation.
The Core Misconception: Agentic AI Is Not Automation 2.0
Weiss nails the central point: The expectation around Agentic AI is rooted in decades of experience with classical process automation. A stable process gets digitised, a manual step disappears, the effect is directly measurable.
That’s the mental model of the last twenty years. And it’s wrong for Agentic AI.
I use an analogy: Classical automation is a transfer line. Workpiece in, workpiece out, always the same. Every cycle predictable. Every fault reproducible.
Agentic AI is a workshop with a journeyman who makes his own decisions. He selects the tool, interprets the drawing, decides the sequence. Most of the time, he gets it right. Sometimes he doesn’t. And when he gets it wrong, nobody notices — until the part fails at the customer’s site.
You can set up a transfer line once and walk away. The journeyman, you have to manage. Continuously. That’s the difference most companies haven’t priced in.
Three Hidden Cost Blocks — Translated to the Factory Floor
Weiss identifies three cost drivers that appear in no business case. Let me translate them into the language of manufacturing operations.
1. Data Integration — or: The Master Data Disaster
Weiss writes about “context stitching” — the assembly of inconsistent data from ERP, CRM, MES and proprietary systems. Semantic inconsistencies, missing identifiers, conflicting time references.
On the factory floor, this sounds like:
The material master in SAP says the part weighs 2.4 kg. The logistics manager’s Excel says 2.7 kg. The actual scale at the loading dock reads 3.1 kg — because the packaging was never maintained.
I described exactly this case in an earlier article: an automotive supplier near Stuttgart whose Transport Management System couldn’t calculate optimised routes. Not because the software was flawed — but because nobody knew how much the parts actually weighed.
Agentic AI amplifies this problem exponentially. A classical system throws an error when data is missing. An agent improvises. It takes the next best value, interpolates, estimates — and nobody sees the error until the pallet doesn’t fit on the truck or freight costs explode.
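The contrast between a classical system and an improvising agent fits in a few lines. This is a minimal sketch, not real ERP code; the field names, the fallback logic and the "similar parts" heuristic are all invented for illustration:

```python
def classical_weight_lookup(material: dict) -> float:
    """Classical automation: missing master data is a hard stop."""
    if material.get("gross_weight_kg") is None:
        raise ValueError(f"Missing weight for part {material['id']} - fix the master data")
    return material["gross_weight_kg"]

def agent_weight_lookup(material: dict, similar_parts: list[dict]) -> float:
    """Agent-style behaviour: missing data is silently improvised.
    The estimate can be plausible and still wrong, e.g. because the
    packaging weight was never maintained anywhere."""
    if material.get("gross_weight_kg") is not None:
        return material["gross_weight_kg"]
    # Improvise: average over "similar" parts - nobody sees this happen.
    estimates = [p["gross_weight_kg"] for p in similar_parts if p.get("gross_weight_kg")]
    return sum(estimates) / len(estimates) if estimates else 1.0  # arbitrary default

part = {"id": "4711", "gross_weight_kg": None}
neighbours = [{"gross_weight_kg": 2.4}, {"gross_weight_kg": 2.7}]
print(agent_weight_lookup(part, neighbours))  # estimates roughly 2.55 kg; the scale at the dock reads 3.1
```

The classical function fails loudly and forces a master-data fix. The agent-style function returns a confident number, and the error only surfaces when the pallet does not fit on the truck.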
Context stitching isn’t an IT project. It’s an operational overhaul. And whoever skips it before deploying the first agent is building a house on sand.
2. Interface Logic — or: When the Agent Writes Instead of Reads
The most dangerous passage in the heise article is this one: “Since an agent doesn’t just read but also writes — closing tickets, triggering orders, modifying system parameters — transactional integrity becomes a critical factor.”
Let me translate that into supply chain reality:
An AI agent in procurement evaluates a supplier as reliable based on available data and automatically triggers a blanket order call-off for 50,000 parts. What the agent doesn’t know: last week, that supplier extended their payment terms from 30 to 90 days — a classic early warning signal for liquidity problems. The information sits in an email from the head of purchasing, not in a structured data field.
Result: 50,000 parts on order from a supplier who may not be able to deliver in three months. And the irony: the agent did everything "right", within its decision space.
Weiss describes this technically as idempotency, rollback strategies and race conditions. On the factory floor, it means: Whoever gives an agent write access to SAP without understanding the consequence chain is playing Russian roulette with the supply chain.
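What such safeguarding looks like in practice can be sketched with two of the mechanisms Weiss names: an idempotency key, so that retries never duplicate an order, and a hard plausibility limit that forces escalation to a human. Everything here, including the function and field names, is a hypothetical stand-in for a real ERP interface:

```python
# In-memory stand-ins for ERP state; in reality this lives in SAP et al.
processed_keys: set = set()
order_log: list = []

def guarded_call_off(supplier: str, quantity: int, idempotency_key: str,
                     max_quantity: int = 10_000) -> dict:
    """Wrap an agent's write action in basic transactional safeguards:
    idempotency (retries do not duplicate the order) and a plausibility
    limit that escalates to a human instead of writing."""
    if idempotency_key in processed_keys:
        return {"status": "duplicate_ignored"}           # retry-safe
    if quantity > max_quantity:
        return {"status": "escalated_to_human",          # human-on-call
                "reason": f"{quantity} exceeds limit {max_quantity}"}
    processed_keys.add(idempotency_key)
    order = {"supplier": supplier, "quantity": quantity}
    order_log.append(order)                              # the actual "write"
    return {"status": "ordered", **order}

print(guarded_call_off("ACME", 50_000, "key-1"))  # escalated, nothing written
print(guarded_call_off("ACME", 5_000, "key-1"))   # ordered
print(guarded_call_off("ACME", 5_000, "key-1"))   # duplicate ignored
```

The point is not the fifteen lines of Python. The point is that someone with industrial experience has to decide what `max_quantity` is for each part, each supplier, each season, and who gets the escalation.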
This is not a prompt engineering problem. This is a problem of missing industrial experience.
3. Safeguarding Overhead — or: The Myth of Headcount Reduction
The most seductive promise of Agentic AI is personnel savings. Salesforce cut 4,000 positions. The headline works. But Weiss dismantles the myth: “Human-in-the-loop is frequently replaced by human-on-call.”
Translated: The dispatcher who previously processed 200 orders per day manually now handles 20 escalation cases that the agent couldn’t resolve. Those 20 cases are the hardest, the most ambiguous, the most consequential. The dispatcher works less — but under higher pressure, with greater complexity, and with less routine as an anchor.
I know this pattern from the lean manufacturing world. In the early 2000s, we automated production lines in the furniture industry and reduced headcount. What remained were the disruptions. The remaining staff needed more qualification, not less. Companies that had let their best people go found themselves with machines nobody could operate.
With Agentic AI, exactly this pattern is repeating — only faster and with higher stakes.
And then there’s the cost block that appears in nobody’s business case: monitoring, incident handling, compliance reviews, drift detection, version conflicts between model, prompt and tooling. Weiss calls it observability. I call it: the invisible factory behind the AI factory. And that invisible factory needs people, processes and budget — permanently.
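Even the smallest piece of that invisible factory has to be built and staffed. As an illustration, here is one trivial observability signal: the share of agent decisions that had to be escalated to a human, over a sliding window. The window size and alert threshold are arbitrary illustration values, not recommendations:

```python
from collections import deque

class EscalationRateMonitor:
    """Minimal sketch of one observability signal: the escalation rate
    of an agent over a sliding window. A rising rate is an early hint
    of drift - in the model, the data, or the process around it."""
    def __init__(self, window: int = 100, threshold: float = 0.15):
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def record(self, escalated: bool) -> None:
        self.outcomes.append(escalated)

    def drifting(self) -> bool:
        if not self.outcomes:
            return False
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate > self.threshold

monitor = EscalationRateMonitor(window=50, threshold=0.15)
for i in range(50):
    monitor.record(escalated=(i % 5 == 0))  # simulated 20% escalation rate
print(monitor.drifting())  # True: 20% is above the 15% threshold
```

And this is the easy part. Deciding what counts as an escalation, who reviews the alert, and what to do when the rate climbs is the permanent work that never appears in the business case.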
Why 56% See No ROI — The Real Root Cause
The numbers from PwC and Deloitte tell a clear story: the majority of companies have systematically underestimated the total cost of Agentic AI. Not because the technology fails. But because the mental model is wrong.
Companies calculate like this:
Cost = licence + inference + integration
Benefit = eliminated positions × annual salary
That’s the calculus of a transfer line. Not of an autonomous system.
The correct calculation would be:
Cost = licence + inference + integration + master data remediation + interface safeguarding + monitoring infrastructure + escalation personnel + compliance + continuous training
Benefit = decision velocity × decision quality × scalability − error cost from hallucination
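Put into numbers, the gap between the two calculations becomes tangible. Every figure below is invented purely for illustration; the orders of magnitude will differ for every company:

```python
# All figures invented for illustration (annual EUR, rough orders of
# magnitude for a mid-sized manufacturer). Only the structure matters.
business_case_cost = {
    "licence": 120_000,
    "inference": 60_000,
    "integration": 150_000,
}
hidden_cost = {
    "master_data_remediation": 200_000,
    "interface_safeguarding": 90_000,
    "monitoring_infrastructure": 80_000,
    "escalation_personnel": 120_000,
    "compliance": 50_000,
    "continuous_training": 40_000,
}

naive_total = sum(business_case_cost.values())
full_total = naive_total + sum(hidden_cost.values())

print(f"Cost in the business case: {naive_total:,} EUR")
print(f"Actual total cost:         {full_total:,} EUR")
print(f"Underestimated by factor   {full_total / naive_total:.1f}")
```

With these invented numbers, the hidden blocks are larger than the entire original business case. That is the structural reason why 56 percent see no ROI: the denominator of their calculation was a fraction of the real one.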
But only someone who understands both the technology and operational reality can build this equation. Someone who knows what a wrong disposition costs. What a supply failure in week 37 means. What happens when an agent interprets blocked stock as available.
That’s the Business Quotient. And it’s missing in most AI programmes — not on the technical side, but at the decision-making level.
What This Means for SMEs
The heise article describes a problem that hits mid-sized manufacturers (SMEs) harder than it hits the large corporates.
Large enterprises can afford the safeguarding overhead. They have dedicated MLOps teams, compliance departments, test infrastructure. When an agent misfires, it’s an incident. Not an existential risk.
Mid-sized companies don’t have that buffer. A rogue agent in procurement that triggers five incorrect orders overnight can strain the cash flow of a €50 million business for an entire quarter. And there’s nobody in-house who spots the model drift before the damage materialises.
The answer is not to avoid Agentic AI. The answer is Adult Supervision — experienced people who don’t just admire the engine but know which chassis it belongs in. Who calculate total cost before the budget is approved. Who see the agent not as a replacement for the dispatcher, but as a tool the dispatcher commands.
Conclusion: The ROI Isn’t in the Model — It’s in the Leadership
Harald Weiss concludes that Agentic AI does not run by itself. I go further:
Agentic AI is a leadership task. Not an IT task.
The 56 percent of CEOs who see no ROI didn’t buy the wrong tool. They asked the wrong question. The question was: “What can AI do for us?” The right question would have been: “Are our processes, our data and our people ready for a system that makes autonomous decisions?”
Those who can’t answer that question shouldn’t approve the budget. Those who can usually need less budget than planned — because half the planned agents become redundant once you truly understand the processes.
That’s Industrial Translation. And it’s what manufacturing needs right now — not more agents, but the right people to govern them.
The original article by Harald Weiss appeared on heise online.
