“Will our BOMs end up in ChatGPT?” – Why AI anxiety in manufacturing is the real risk

Reading time: approx. 7 minutes

One sentence that says it all

“Will our bills of materials end up in ChatGPT?”

I hear this regularly. In meeting rooms of mid-sized manufacturers, from managing directors who run their company in the third generation. From people who know every customer by name and every machine on the shop floor.

The concern behind it is legitimate. The result – doing nothing at all – is not.

Because while German Mittelstand companies debate whether artificial intelligence poses a risk to their intellectual property, competitors in Eastern Europe, Asia, and increasingly in their own backyard are using exactly this technology to calculate faster, plan more precisely, and produce more cheaply.

The uncomfortable truth: AI is not the biggest risk to your competitive advantage. The decision not to use it is.

Why the fear makes sense

Let me be clear: I’m not an evangelist telling you to throw everything into the cloud tomorrow. I come from industry. I’ve planned production lines, configured bills of materials, and implemented SAP systems long before the word “prompt” had a technical meaning.

I understand why a managing director gets nervous when his sales team suddenly copies customer data into an AI tool. Or when engineering uses ChatGPT to draft technical documents – and nobody knows what happens to the input data.

That nervousness is a sign of responsibility. But it becomes a problem when it turns into paralysis.

The three fears – and what’s really behind them

In my conversations with decision-makers, three core fears keep surfacing:

Fear #1: “Our data is our capital – AI will leak it”

This is the most common concern. And it’s not unfounded. Anyone using the free version of ChatGPT and entering design data, cost calculations, or supplier information needs to know: that data may be used to train the model.

But that’s not an argument against AI. It’s an argument for a clear usage policy.

Here’s the reality: Enterprise solutions exist today – from Microsoft Azure OpenAI to private cloud instances to local models – that never use your data for training. Your BOMs, your formulas, your calculations stay in your environment.

The question is not: “Is AI safe?” The question is: “Have we decided which AI environment is right for us?”

Fear #2: “Our IT guy blocked everything – because of GDPR”

I hear this one a lot too. And again, there’s a legitimate core to it. GDPR sets clear requirements for processing personal data. Anyone deploying AI tools needs to know where data is processed, who has access, and whether a data processing agreement is required.

But – and this is the critical point – GDPR does not prohibit the use of AI. It requires you to manage it responsibly.

A blanket ban on all AI tools is not a compliance strategy. It’s a capitulation. Because what actually happens? Employees use the tools anyway – on personal devices, without guidelines, without oversight. That’s called Shadow IT, and that is the real GDPR risk.

A structured approach looks different:

  • Classification: Which data may go into which environment? (public / internal / confidential / strictly confidential)
  • Tool approval: Which AI tools are approved for which use case?
  • Training: Do your employees know what they can and cannot enter?

Note: I’m not a lawyer or a data protection authority. For the legal assessment of your specific situation, I recommend consulting a specialised data privacy advisor. What I deliver is the operational bridge: how to deploy AI in a way that meets your DPO’s requirements – while still moving forward.

Fear #3: “We haven’t even got our master data under control – how are we supposed to use AI?”

This is my favourite objection. Not because it’s wrong – it’s often disturbingly accurate. But because it gets abused as an excuse.

Yes, your material masters in SAP are probably not perfect. Yes, your supplier data has duplicates. Yes, your BOMs have inconsistencies that nobody has touched since 2014.

But you know what? AI can help you solve exactly that problem.

Generative AI is exceptionally good at recognising patterns in unstructured data, identifying duplicates, and suggesting clean-up actions. Your first AI use case doesn’t have to be a self-driving warehouse. It can be cleaning up your material masters.
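To make the duplicate-detection point concrete, here is a deliberately small sketch using simple string similarity. The material records and field names are invented for illustration (this is not an actual SAP table layout), and a production clean-up would use more robust matching, but the principle is the same: normalize away formatting noise, then flag suspiciously similar descriptions for human review.

```python
from difflib import SequenceMatcher

# Hypothetical material master records - field names are illustrative only.
materials = [
    {"id": "100001", "description": "HEX BOLT M8X40 DIN 933 A2"},
    {"id": "100002", "description": "Hex bolt M8x40, DIN933, A2"},
    {"id": "100003", "description": "WASHER M8 DIN 125 A2"},
]

def normalize(text: str) -> str:
    """Lowercase and drop punctuation so formatting noise doesn't hide duplicates."""
    return "".join(c for c in text.lower() if c.isalnum() or c.isspace())

def find_duplicate_candidates(records, threshold=0.8):
    """Flag record pairs whose normalized descriptions exceed the similarity threshold."""
    candidates = []
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            ratio = SequenceMatcher(
                None, normalize(a["description"]), normalize(b["description"])
            ).ratio()
            if ratio >= threshold:
                candidates.append((a["id"], b["id"], round(ratio, 2)))
    return candidates

# Flags 100001/100002 as likely duplicates; the washer stays untouched.
print(find_duplicate_candidates(materials))
```

A human still makes the merge decision. The AI's job is to turn "someone should go through 40,000 material masters" into "someone should review 300 flagged pairs."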

It’s not sexy. But it’s the lever that makes everything else possible.

What actually happens when you do nothing

I’m not going to paint a doomsday scenario. I’ll tell you what I observe.

Scenario 1: The competitor in your own market. A mid-sized supplier in southern Germany introduced AI-powered quote calculation. Result: response time to RFQs dropped from 5 days to 8 hours. The competitor in the same segment still takes 5 days. Guess who gets the contract.

Scenario 2: The talent that leaves. Your best people – the production planners, the buyers, the design engineers – can see that other companies let them work with modern tools. AI is not a threat to the next generation. It’s a tool they want to use. If you ban it, you don’t just lose efficiency. You lose talent.

Scenario 3: The slow erosion. No big bang, no disruptive moment. Just a little slower every quarter, a little more expensive, a little less competitive. Until the board asks why margins have been declining for three years.

What I tell my clients

I’m not a consultant who sells you a 200-page strategy. My approach is pragmatic, built on nearly 30 years of industrial experience:

Step 1: Understand where you stand. An AI readiness assessment that doesn’t start with technology, but with your processes and data. How mature is your master data? How digitised are your core processes? Where are the quick wins?

Step 2: Define the rules of the game. An AI usage policy that empowers your people rather than blocking them. Clear rules: these tools yes, those no. This data yes, that data absolutely not.

Step 3: Start small, make it measurable. Implement one concrete use case – ideally one that delivers visible results within 8 weeks. No AI revolution, just a controlled pilot.

Step 4: Scale – or don’t. Not every use case will work. That’s fine. But the organisation has learned how to evaluate, implement, and govern AI projects.

You don’t need a data scientist for this. You need someone who knows how an SAP material master is structured – and at the same time understands what a large language model can do with it.

Conclusion: Control, not fear

The fear of AI in manufacturing is not a sign of backwardness. It’s a sign that business owners want to protect their life’s work.

But protection doesn’t mean standing still. It means defining the rules yourself, instead of waiting for the market to dictate them.

Every quarter you say “let’s wait and see” is a quarter in which your competitor makes their processes faster, cheaper, and more data-driven.

You don’t have to do everything at once. But you have to start.

And the honest answer to the opening question? Your BOMs are probably already in ChatGPT. Not because you decided so, but because your design engineer needed a quick answer last week. The question is not whether AI enters your company. It’s already there. The question is whether you take control.

E-Mail: sven.vollmer@business-quotient.com

Sven Vollmer is “The Industrial Translator.” He bridges the gap between industrial operational reality (SAP, supply chain) and the possibilities of generative AI. His focus is on value-creating applications beyond the hype.

Transparency Note: This article was created with editorial support from AI (Gemini/Claude). The ideas, technical validation, use case selection, and adult supervision were 100% authored by Sven Vollmer.

LinkedIn: www.linkedin.com/in/sven-vollmer-bq
