💡 There is a place for human judgment in demand planning. Early in my career I worked in the transportation industry and sat on two industry advisory boards (the Tire and Rubber Association of Canada and the RMA), one of whose functions was to forecast vehicle tire consumption. I eventually chaired the Forecast Committee of the RAC (now TRAC) and couldn't understand why we spent so much time debating the minutiae of individual opinions when we had a robust time series dataset begging for autoregression. I even cut one of the quarterly meetings (to the chagrin of delegates who liked escaping their analyst chairs a few times a year). And fortuitously, during my tenure as Chair, I was right: the human inputs coming from competing members of the tire industry typically added no value, and as many proponents of a math-only approach to forecasting will tell you, they actually made the forecast worse than if we'd used even the simplest time series methods.

However, experience has shown me (along with decades of research by people like Paul Goodwin, Len Tashman, and Robert Fildes) that we ignore the benefit of human oversight and judgment at our peril. Appropriately applied, it shores up the inherent shortcomings of time series methods and can safeguard against blind spots.

🚗 Vehicle Miles Driven is an important dataset we used to predict tire consumption, as it has a direct, causal relationship with the latter. Typically it was a very stable dataset with strong seasonality and a very moderate trend (see graphic below). With the COVID-19 lockdowns, however, the situation fundamentally changed. Time series forecasting could never predict the change in vehicle activity as effectively as humans attuned to the latest state and federal mandates could.

Lesson: Computers are almost always better than humans at interpreting past data, and humans are not nearly as objective or effective at prediction as they think they are. When they touch forecasts, they usually make them worse. But understanding where to appropriately apply judgment in a statistically-driven framework can save you from driving off a cliff.
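To ground the time-series side of this argument, here is a minimal sketch of the kind of purely statistical seasonal baseline described above, fit to synthetic monthly data shaped like Vehicle Miles Driven (strong seasonality, very moderate trend). Holt-Winters exponential smoothing via statsmodels stands in for "even the simplest time series methods"; it is an illustrative choice, not the committee's actual model, and the data is invented.

```python
# A minimal sketch, assuming a stable seasonal series like Vehicle Miles
# Driven. The data is synthetic and illustrative; Holt-Winters is a
# stand-in for a simple statistical baseline, not the author's method.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly series: very moderate trend + strong annual
# seasonality + noise (roughly shaped like US monthly VMT in billions).
rng = np.random.default_rng(42)
months = pd.date_range("2015-01-01", periods=60, freq="MS")
trend = np.linspace(250, 260, 60)                    # very moderate trend
season = 20 * np.sin(2 * np.pi * months.month / 12)  # strong seasonality
vmt = pd.Series(trend + season + rng.normal(0, 2, 60), index=months)

# Fit additive Holt-Winters and project 12 months ahead.
model = ExponentialSmoothing(
    vmt, trend="add", seasonal="add", seasonal_periods=12
).fit()
print(model.forecast(12).round(1))
```

A model like this extrapolates the past faithfully, which is precisely why it could never anticipate a break like the COVID-19 lockdowns; that discontinuity is where judgmental overrides earn their keep.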
Statistical Forecasting vs Judgmental Forecasting
Summary
Statistical forecasting relies on computer models and past data to predict future trends, while judgmental forecasting uses expert opinions and real-world context to refine those predictions. Understanding when to blend both methods helps create more accurate and well-rounded forecasts for business planning and demand management.
- Combine approaches: Use both statistical data and human insights to get a balanced forecast that captures trends and accounts for unexpected changes.
- Enrich the numbers: Layer qualitative information from sales, marketing, and market trends onto your baseline predictions to make them more relevant and actionable.
- Identify blind spots: Recognize the limits of relying only on historical data or expert opinions, and adjust your forecasting process to address gaps and improve accuracy.
-
Your financial forecast is lying to you.

(Save this + Repost for others if it's useful ♻️)

It's not your fault. It's your method. After leading FP&A teams for over a decade, I see the same mistake kill budgets again and again: relying on a single source of truth.

The secret isn't finding one 𝘱𝘦𝘳𝘧𝘦𝘤𝘵 technique. It's combining the right ones. Here's my go-to "accuracy booster" combo:

1. 𝗗𝗿𝗶𝘃𝗲𝗿-𝗕𝗮𝘀𝗲𝗱 𝗙𝗼𝗿𝗲𝗰𝗮𝘀𝘁𝗶𝗻𝗴 — you estimate the impact of major planned business changes.
✅ 𝗧𝗵𝗲 𝗚𝗼𝗼𝗱: It accounts for real-world strategy (new products, market expansion, etc.).
❌ 𝗧𝗵𝗲 𝗕𝗮𝗱: It can be heavily influenced by human bias. (Hello, happy ears.)

2. 𝗦𝘁𝗮𝘁𝗶𝘀𝘁𝗶𝗰𝗮𝗹 𝗙𝗼𝗿𝗲𝗰𝗮𝘀𝘁𝗶𝗻𝗴 — you use historical data and algorithms to project trends.
✅ 𝗧𝗵𝗲 𝗚𝗼𝗼𝗱: It's pure data. Completely immune to internal politics or bias.
❌ 𝗧𝗵𝗲 𝗕𝗮𝗱: It can overreact to recent blips in data and miss the bigger picture.

See the problem? Each one has a blind spot.

My solution is brutally simple: Run both methods in parallel. Then take the average of the two (see the sketch below). This simple act balances human insight with unbiased data.

The result? A forecast you can actually trust. It's how we consistently beat targets.

What's the biggest forecasting challenge you face? Let's talk about it in the comments. 👇

-Christian Wattig

P.S. This isn't just theory. I've implemented this exact blended approach at several high-growth companies. It just works.
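As a concrete illustration of the "run both in parallel, then average" combo above, here is a minimal sketch. The figures and the `blended_forecast` helper are hypothetical, assuming the two forecasts are aligned period by period; the post does not prescribe an implementation.

```python
# Hypothetical sketch of the equal-weight blend described above.
# Numbers and names are illustrative; a real setup would pull these
# from the FP&A driver model and the statistical baseline.
import numpy as np

def blended_forecast(statistical: np.ndarray, driver_based: np.ndarray) -> np.ndarray:
    """Equal-weight average of two parallel forecasts, period by period."""
    return (statistical + driver_based) / 2.0

# Statistical baseline: trend projected from history (immune to politics).
statistical = np.array([100.0, 102.0, 104.0, 106.0])

# Driver-based view: includes a planned product launch in the last two
# periods (captures strategy, but carries human bias).
driver_based = np.array([100.0, 101.0, 112.0, 118.0])

print(blended_forecast(statistical, driver_based))
# -> [100.  101.5 108.  112. ]
```

The equal weighting is the simplest possible choice; nothing stops a team from tilting the weights toward whichever method has the better track record for a given product.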
-
I was interviewing a bright candidate for a demand planner role a while back. To gauge his practical thinking, I posed a scenario: "Imagine your system generates a baseline forecast of 10,000 units for a key product next month. The statistical model is sound. What's your next action?"

He gave a textbook-perfect answer about reviewing historical trends, model accuracy, and checking for outliers. All crucial steps. I paused. "That's an excellent start. But what if that 10,000, as precise as it looks, is missing the most critical piece of information?" He seemed curious, waiting for the answer.

This is a scenario I see play out in many organizations. We invest heavily in sophisticated forecasting systems that are brilliant at analyzing the past, but they often lack forward-looking context. A forecast is just a number until we enrich it. Many industry studies highlight that forecasts relying purely on historical data can miss the mark significantly, sometimes by as much as 30-40% for more volatile items. The plan becomes a mathematical exercise, disconnected from the commercial realities on the ground.

This is where the concept of Demand Enrichment becomes invaluable. It is the structured process of layering qualitative intelligence on top of the quantitative baseline. In a previous role, we transformed our forecast accuracy not by buying new software, but by changing our process. Our statistically generated forecast was our starting point, not our destination. We built a simple but disciplined enrichment framework:

- Collaborative Input: We worked with the sales team to capture insights on key account promotions, new listings, or potential risks. This wasn't a casual chat; it was a structured input into the plan.
- Marketing Integration: The marketing team's activity calendar was overlaid onto the demand plan. We could now quantify the expected uplift from a specific campaign instead of just hoping for the best.
- Market Intelligence: We dedicated a small part of our demand review meeting to discussing competitor activity and market trends, translating these discussions into tangible assumptions in our plan.

Suddenly, the number had a narrative. 10,000 units was no longer just a point on a graph. It became "10,000 units, composed of 8,500 baseline sales plus an anticipated lift of 2,000 units from the 'Summer Sale' campaign, offset by a potential 500-unit loss due to a competitor launching a similar product" (see the sketch below). This enriched number is something the entire organization can understand, align on, and execute against. It transforms the forecast from a passive prediction into an active planning tool.

-----

If you have any questions about Demand and Supply planning, feel free to ask using the links in the Bio.

P.S. How does your organization go beyond the numbers to tell the full story of your demand?
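Here is a hypothetical sketch of what the enriched number could look like as a data structure rather than prose: a statistical baseline plus named, signed adjustments, each tied to a source and an assumption. The `EnrichedForecast` and `Adjustment` classes and their fields are illustrative inventions built around the post's 8,500 / +2,000 / -500 example, not a real planning system's schema.

```python
# A hypothetical sketch of Demand Enrichment as data: the forecast is a
# baseline plus named, signed adjustments rather than a bare number.
from dataclasses import dataclass, field

@dataclass
class Adjustment:
    source: str   # e.g. "Marketing", "Sales", "Market Intelligence"
    reason: str   # the assumption behind the adjustment
    units: int    # signed: uplift is positive, risk is negative

@dataclass
class EnrichedForecast:
    sku: str
    baseline_units: int                       # statistical starting point
    adjustments: list[Adjustment] = field(default_factory=list)

    @property
    def total_units(self) -> int:
        return self.baseline_units + sum(a.units for a in self.adjustments)

    def narrative(self) -> str:
        parts = [f"{self.baseline_units:,} baseline"]
        parts += [f"{a.units:+,} ({a.source}: {a.reason})" for a in self.adjustments]
        return f"{self.total_units:,} units = " + ", ".join(parts)

plan = EnrichedForecast("KEY-PRODUCT", 8_500)
plan.adjustments.append(Adjustment("Marketing", "'Summer Sale' campaign uplift", 2_000))
plan.adjustments.append(Adjustment("Market Intelligence", "competitor launch risk", -500))
print(plan.narrative())
# -> 10,000 units = 8,500 baseline, +2,000 (Marketing: 'Summer Sale'
#    campaign uplift), -500 (Market Intelligence: competitor launch risk)
```

Because each adjustment carries its owner and its assumption, the demand review can debate the pieces rather than argue over a single opaque total.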