OUR POINT OF VIEW

Where AI Actually Fits in CPG Operations

Our point of view — and what it means for how we work.

Most mid-market CPG companies don't have an AI strategy, and most of the firms selling them one are selling the wrong thing. CEOs are anxious about falling behind, vendors are pitching strategy decks, and consultants are walking out of the building with retainers and no execution. Meanwhile the operations team is still hunting for data in the morning and reconciling spreadsheets in the afternoon.

We've been in those operations seats. We've built the systems. We've watched the AI projects that worked and the ones that didn't. Four things hold up across every engagement we've done — and they shape how ModulusOps approaches every client.

Claim 01

You can't keep up with AI. That's not the goal.

AI is changing fast enough that nobody can stay current. New models, new tools, new frameworks every few weeks — most of which won't matter in six months. The CEO trying to “have an AI strategy” is chasing a target that moves faster than they can read. The anxiety is real, and most of the AI consulting industry is selling solutions to it. The framing is wrong.

Keeping up with AI isn't the work. Specifying the work is. Most operations problems can be described as a set of inputs, a process, and a desired output — but in mid-market companies, that description rarely exists in writing. The knowledge lives in three people's heads, the steps differ depending on who's doing the work, and the rules are unwritten. No AI tool — current or future — can automate a process that hasn't been specified.

The companies that get value from AI start there. They define the work. They clean the data. They sequence the upgrades. Then the question of which AI tool to use becomes a small decision instead of a strategic crisis — most tools work well on a well-specified process, and swapping one for another later is straightforward. The companies chasing the latest tool without doing this work spend money and burn time without changing what their team does on Monday morning.

And the goal isn't replacing people. It's the opposite — taking the manual, repetitive work off the people who shouldn't be doing it so they can do the higher-level work they were hired for. AI is leverage for the team you have, not a substitute for it.

Stop trying to keep up. Start specifying the work. The tools will follow.

Claim 02

Operations expertise leads. AI follows.

The companies getting real results from AI didn't start with an AI strategy. They started with a clear operations problem — too much manual reconciliation, forecasts that kept missing, vendor invoices that needed auditing — and used AI as one tool among several to solve it. The AI didn't lead. It was downstream of a well-understood operational problem, and it competed against off-the-shelf software, process redesign, and training before it earned the build.

The firms selling “AI transformation” to mid-market CPG are pitching the inverse. Strategy first, problem later. Frameworks, maturity models, roadmaps, and a $75K engagement that ends with a deck and an “implementation partner” referral. The buyer signs because the CEO is anxious about falling behind. Six months later there's no deployed AI, the operations problems are unchanged, and the team has lost a quarter of momentum.

We start from the other direction. The Operations X-Ray identifies where the company is leaking money, time, and energy — and only then asks where AI fits, where it doesn't, and where a different solution is cheaper and faster. Some engagements end up AI-heavy. Some don't. Both are correct outcomes. The diagnosis decides; the toolkit follows.

If a firm is selling you AI before they've understood your operations, you're being sold something that won't deploy.

Claim 03

Most AI failures aren't AI failures.

They're process failures and data failures that AI can't fix. A demand forecasting model trained on incomplete sales data and an unmaintained promo calendar isn't going to outperform the spreadsheet — it's going to be wrong faster. An AI agent that classifies invoice exceptions can't function if the PO data and the receipt data live in different systems and nobody's reconciled them. The model isn't the bottleneck. The data and the process are.

This is why AI consulting engagements often fail at companies that are operationally messy. The AI tooling works. The integration breaks. The data is dirty. The process the AI was supposed to automate turns out to be undefined — three people do it three different ways, and nobody documented the rules. Six months in, the deployment is technically running but no one trusts the output.

Before we recommend AI for any operational function, we score how ripe the underlying process is for automation. The rubric is part of every X-Ray we run — a 1-to-5 score from “Not Ripe” (heavy judgment work, fundamentally human) to “Imminent” (process documented, data clean, decision logic rule-based, value within thirty days). The same engagement might score one function a 4 and another a 2. The 4 gets built. The 2 gets the process and data work first; the AI comes later, when it has a chance of actually working.

The work is almost always upstream of the model. That's where the engagement starts.

Claim 04

Build, buy, or train — not always build.

Some operations problems get solved by training the team to use existing tools well. Some by buying off-the-shelf software that fits. Some by building a custom system. Most consulting firms pretend they can do all three, but their incentives push toward build — building bills more than recommending. Most software vendors push toward buy — their product is the answer regardless of the question. Most training firms push toward train. Each is right sometimes and wrong the rest of the time.

The honest version of this work requires being able to recommend any of the three. Training the buyer to use NetSuite better is sometimes the right answer. Buying a $200/month forecasting tool is sometimes the right answer. Building a custom invoice audit system is sometimes the right answer. The wrong answer is whichever one a firm has structural reason to recommend regardless of the diagnosis.

Every Operations X-Ray includes a Company AI POV section that names which problems fit which path — what to train, what to buy, what to build, and what not to do at all. We get paid for the diagnostic and the engagement that follows; we don't get paid more for recommending more building. That's deliberate. It's also why we won't be the firm pitching every problem as a custom AI agent.

Recommend the cheapest option that works. The relationship is the moat, not the markup.

If this matches how you'd want an operations partner to think, the Operations X-Ray is where the work starts.

Book a Discovery Call