The Supply Chain Data Problem AI Won't Solve for You
Before you invest in algorithms, ask whether your data can support a decision.
A fashion retailer I worked with had an on-time-in-full (OTIF) score of 13%, which meant roughly seven out of eight products weren’t arriving when they were supposed to. Everyone knew it was a problem, and everyone had a theory about whose fault it was.
Logistics blamed sourcing, sourcing blamed design, design blamed the buyers, and the buyers blamed the suppliers. The arguments went round in circles because nobody had the data to prove anything, and in the absence of proof, the function with the least political capital gets blamed. In this case, logistics took the heat because they were the last link in the chain before the product failed to arrive.
They were also, as it turned out, the most reliable part of the entire process.
Building the picture that didn’t exist
We had to reconstruct the data from scratch, not because the individual functions lacked information, but because nothing was joined up. Each team had their own view of their own slice, and those views didn’t connect to form anything resembling an end-to-end picture.
We mapped the full range plan, the make times, the lead times for fabric orders, transit times from factories through to the UK, and the time from distribution centres into stores and the ecommerce warehouse. We built, in effect, a digital twin of what was actually happening rather than what people assumed was happening.
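The reconstruction above amounts to joining each function's lead times into one timeline and asking where the slack actually went. A minimal sketch of that idea, with entirely hypothetical stage names, lead times, and dates, not the retailer's actual data:

```python
from datetime import date

# Hypothetical milestones for one product, reconstructed from each
# function's own records (design, sourcing, production, logistics).
stages = [
    # (stage, planned_days, actual_start, actual_end)
    ("design_signoff", 14, date(2023, 1, 2),  date(2023, 1, 27)),
    ("fabric_order",   21, date(2023, 1, 27), date(2023, 2, 20)),
    ("make",           30, date(2023, 2, 20), date(2023, 3, 22)),
    ("transit_to_uk",  28, date(2023, 3, 22), date(2023, 4, 19)),
    ("dc_to_store",     5, date(2023, 4, 19), date(2023, 4, 24)),
]

# For each stage, compare the actual duration with the planned lead
# time to see where the delay was actually created.
for stage, planned, start, end in stages:
    actual = (end - start).days
    slip = actual - planned
    print(f"{stage:15s} planned {planned:3d}d  actual {actual:3d}d  slip {slip:+d}d")
```

In this invented example the slip sits almost entirely in design sign-off while the logistics stages run to plan, which is the shape of the pattern the real reconstruction revealed.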
The answer was not what anyone expected. The root cause was late design sign-off: designers kept tweaking products past their deadlines, which pushed back sourcing, compressed logistics, and meant products landed late. The function that got blamed the least was creating the most delay.
We also found suppliers who never hit their sourcing timelines, which meant we had to build contingency into the plan. And logistics, the usual scapegoat, was consistently hitting its windows.
Within a year, OTIF went from 13% to around 70%.
What visibility actually changes
Here’s what surprised me: once the data was visible, behaviour almost self-corrected.
When designers could see that their sign-off delays cascaded through the entire chain, they changed how they worked; when buyers could see how their order timing affected sourcing slots, they adjusted accordingly. The arguments stopped because the data didn’t leave room for argument. If your function was the bottleneck, the numbers showed it, and if it wasn’t, you no longer needed to defend yourself.
We implemented a product lifecycle management (PLM) system as a single source of truth, particularly for the development cycle, and we enforced sign-off deadlines based on actual lead times rather than wishful thinking. The data moved from spreadsheets and people’s heads into a system where anyone could see it.
Transparency reduced conflict because it removed opinion from the conversation. Before, people were blaming each other with no proof. Afterwards, arguments simply didn’t stand up if the data proved you wrong.
Where AI fits, and where it doesn’t
This work laid the foundations for using AI later, but here’s what boards and investors need to understand: AI would not have solved this problem, and would likely have made it worse.
A few years ago, data had to be highly structured before any analytical tool could work with it. That is less true now: modern AI handles unstructured data reasonably well. But there is a critical distinction between unstructured data and wrong data.
AI can interpret messy formats, but it cannot fix data that is fundamentally incorrect or outdated. And because AI presents its outputs with confidence regardless of input quality, pointing an algorithm at fragmented, stale, or contradictory data gives you confidently wrong answers.
In the fashion retailer scenario, if we’d deployed AI before doing the data reconstruction, we’d have automated the finger-pointing. Each function would have had AI-generated analysis proving their position, based on their partial view of the data, and the conflict would have intensified rather than resolved.
When you delegate decisions to AI, you are building on whatever foundations the data provides. If those foundations are poor, you are building confidence on sand.
What to ask before you invest
For any CEO, COO, or operating partner considering AI investment in supply chain or operations, the question isn’t whether AI can add value, it’s whether your data can support a decision.
Start by understanding what data you have and what you don’t, and map it end to end. Accept that you will find gaps, contradictions, and assumptions that have hardened into accepted truth. That’s normal. The danger is not knowing they exist.
Don’t try to build an AI system that looks end to end from day one. Start narrow with use cases where the data is clean enough to trust and the stakes are low enough to learn, running AI and human decisions in parallel so you can check the outputs and build confidence before you loosen the reins.
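Running AI and human decisions in parallel can be as mechanical as logging both and measuring agreement before any automation is switched on. A minimal sketch; the orders, decisions, and the 0.9 threshold are invented placeholders, not a standard:

```python
# Hypothetical parallel-run log: for each order, what the model
# recommended versus what the human planner actually decided.
parallel_log = [
    {"order": "A1", "ai": "expedite", "human": "expedite"},
    {"order": "A2", "ai": "hold",     "human": "hold"},
    {"order": "A3", "ai": "expedite", "human": "hold"},
    {"order": "A4", "ai": "hold",     "human": "hold"},
]

agreements = sum(r["ai"] == r["human"] for r in parallel_log)
agreement_rate = agreements / len(parallel_log)

print(f"agreement: {agreement_rate:.0%}")

# Only loosen the reins once agreement is consistently high, and
# review every disagreement: it is either a model error or a human
# one, and both are worth knowing about.
if agreement_rate < 0.9:
    print("keep humans in the loop; review the disagreements")
```

The point is not the threshold but the habit: every disagreement is examined before the algorithm is trusted on its own.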
The edge cases will trip you up: the situations your historical data doesn’t cover, the exceptions that require judgement, the moments where the algorithm is confident but wrong. You need humans in the loop not because AI is unreliable, but because knowing when to override it requires understanding how it reached its conclusion.
Finally, treat data quality as a cross-functional problem. The data created upstream determines the quality of decisions downstream, which means a planning algorithm is only as good as the demand signals feeding it, and a sourcing schedule is only as good as the design sign-off dates it’s built on. If functions work in silos, they optimise their own slice while degrading the whole.
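Treating data quality as a cross-functional problem can start with simple mechanical checks that upstream dates actually support downstream plans. A hypothetical sketch; the SKUs, field names, and dates are invented for illustration:

```python
from datetime import date

# Hypothetical records joining design sign-off dates to the sourcing
# schedules that depend on them.
plans = [
    {"sku": "S1", "design_signoff": date(2024, 3, 1),  "sourcing_start": date(2024, 3, 5)},
    {"sku": "S2", "design_signoff": None,              "sourcing_start": date(2024, 3, 5)},
    {"sku": "S3", "design_signoff": date(2024, 3, 10), "sourcing_start": date(2024, 3, 5)},
]

# Flag plans built on missing or contradictory upstream data:
# sourcing cannot credibly start before design is signed off.
for p in plans:
    if p["design_signoff"] is None:
        print(f'{p["sku"]}: no sign-off date on record')
    elif p["design_signoff"] > p["sourcing_start"]:
        print(f'{p["sku"]}: sourcing scheduled before design sign-off')
```

Checks this simple surface exactly the gaps and contradictions described above, before any algorithm is pointed at the data.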
Build a forum where teams can see what others are doing, understand dependencies, and flag where upstream data is creating downstream problems. Collaboration isn’t a cultural nicety here, it’s an operational requirement.
The foundation you can’t skip
AI will change how supply chains operate; I don’t doubt that. But the organisations that capture that value will be the ones that did the unglamorous work first: joining up data, creating visibility, and building the muscle for decisions based on evidence rather than opinion.
The 13% to 70% improvement I described happened before any AI was involved, because people could finally see what was actually going on.
That visibility is the foundation. AI is what you build on top of it.

