Forecast accuracy is not just a KPI. It's a trust signal between teams, a feedback mechanism for learning and improving, and a reflection of our intent to serve customers better, optimize inventory, and enable better planning.

We measure forecast accuracy:

1. To understand how close we are to reality.
2. To improve our planning systems and processes.
3. To build alignment between cross-functional teams.

So, if our "why" is to reflect reality and improve decision-making, then how we measure it, and particularly what sits in the denominator, matters more than we think.

---

HOW: The Two Methods and Why They Matter

Let's explore the two common formulas used to calculate forecast accuracy.

I. Accuracy = 1 - |Forecast - Actual| / Actual

This method compares the error to what actually happened. It is best used when actual customer demand is what matters most (as in most FMCG and retail environments), and it is commonly applied in supply chain or demand planning, where under-forecasting can mean stockouts and lost sales.

Advantages:

1. Grounded in reality.
2. Penalizes under-forecasting (missed demand), which is often more dangerous than over-forecasting.

Drawbacks:

1. Can be volatile when actuals are very low.
2. May exaggerate errors for low-volume items.

II. Accuracy = 1 - |Forecast - Actual| / Forecast

This method compares the error to what was planned or committed. It is typically used in financial or strategic planning contexts where accountability to a plan is the primary concern, and it focuses on how reliable the forecast itself was, regardless of actual market behavior.

Advantages:

1. Highlights over-promising and forecast bias.
2. Encourages planners to stick closely to assumptions and commitments.

Drawbacks:

1. May punish bold forecasts and encourage conservative planning.
2. Can distort performance perceptions when forecasts are small and actuals are high.

---

WHAT: So, Which One Should You Choose?
There is no universally right answer, only one that aligns with your purpose.

- If your goal is to improve supply chain execution, use actuals as the denominator.
- If your aim is to drive planning discipline, hold teams accountable using the forecast as the denominator.

Many sophisticated organizations track both metrics; each serves a different stakeholder and decision need. For example:

- Sales and operations teams might focus on actual-based accuracy for service-level management.
- Finance and strategy might use forecast-based accuracy to assess budget adherence.

---

In the end, metrics are only as good as the intent behind them. So before you decide which formula to use, ask yourself:

1. What is the decision I'm trying to inform?
2. What behavior do I want this metric to reinforce?
3. What truth am I trying to reflect?

When you start with why, the right formula often becomes clear. Metrics should guide better decisions, not just better-looking dashboards.

Always start with WHY.
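The difference between the two formulas is easy to see in code. A minimal sketch (function names are mine, and the sample numbers are illustrative):

```python
def accuracy_vs_actual(forecast, actual):
    """Method I: error scaled by what actually happened.
    Undefined when actual == 0, which is exactly the
    low-volume volatility drawback noted above."""
    if actual == 0:
        return None  # metric is undefined for zero actuals
    return 1 - abs(forecast - actual) / actual

def accuracy_vs_forecast(forecast, actual):
    """Method II: error scaled by what was planned or committed."""
    if forecast == 0:
        return None  # metric is undefined for zero forecasts
    return 1 - abs(forecast - actual) / forecast

# Same 20-unit error, different scores depending on the denominator:
f, a = 120, 100  # over-forecast by 20 units
print(accuracy_vs_actual(f, a))    # 1 - 20/100 = 0.80
print(accuracy_vs_forecast(f, a))  # 1 - 20/120 ≈ 0.833
```

Note how the same miss looks better under Method II whenever the forecast is the larger number, which is one way forecast bias can hide inside the metric.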
Forecast Reliability Assessment Methods
Summary
Forecast reliability assessment methods help organizations determine how trustworthy their predictions are, using various statistical and judgment-based tools to measure the accuracy and consistency of forecasts. These approaches are crucial for better planning, inventory management, and risk control across projects, sales, and operations.
- Choose suitable metrics: Decide whether to compare forecasts to actual outcomes or planned values, based on what decision or behavior you want the measurement to support.
- Combine risk approaches: Use both historical data-based methods and project-specific risk assessments for a clearer view of potential uncertainties and outcomes.
- Regularly review biases: Check for patterns of overestimating or underestimating demand to improve future predictions and avoid costly errors.
Pain and Gain Sharing Using Top-Down and Bottom-Up Risk Approaches

In the current market, there are a number of approaches to risk quantification, and their reliability depends on the project's maturity and the team's depth of knowledge. Among these, Reference Class Forecasting (RCF) and Quantitative Risk Assessment (QRA) are commonly used methods that inform the required level of project #contingency.

In the UK, a notable study conducted by the Infrastructure and Projects Authority (IPA) compared RCF with QRA in calculating the risk exposure of large infrastructure projects [1]. The study identifies RCF and QRA as top-down and bottom-up risk approaches, respectively. The former employs historical data from past projects in similar categories to inform the current project, while the latter adjusts the range of potential #cost and #schedule outcomes based on the project's specific characteristics.

The choice between these methods is influenced by various factors, including the project's phase and the availability of detailed information. For example, RCF is advisable at the beginning of a project, when many uncertainties and opportunities exist. Conversely, QRA is more suitable towards the execution phase, as more detailed information about the project becomes available.

While traditionally one method is chosen over the other, combining both has proven most valuable, particularly in projects with a #pain and #gain share mechanism. For instance, an analysis of RCF outcomes based on data from 316 Turkish public construction projects revealed that cost overruns ranged from -22.94% to 133.48%, with an average overrun of 11.33% [2]. It also indicated that 81% of projects experienced a maximum overrun of 20%, suggesting that contractors generally underestimate project costs. This implies a high likelihood of triggering a pain-sharing mechanism if the project's contingency is under 20%.
However, this assumption requires validation through project-specific risks and uncertainties assessed by the QRA method. Comparing the results of both methods offers valuable insights for decision-makers, enhancing their understanding of the potential for gain and/or pain sharing by considering both project-specific data and historical information from similar projects. This comparison fosters a win-win scenario, encouraging parties to commit to outcomes that are cost-efficient, fair, realistic, and reliable.

As the figure illustrates, the projected pain (cost overrun) differs substantially between the two methods. As either an owner or a contractor, which method do you recommend for quantifying pain? Your insight is much appreciated.

References:
[1] https://lnkd.in/g4-RzCUb
[2] https://lnkd.in/gK7wCDHR

#riskmanagement #decisionmaking #collaboration #Metrolinx Infrastructure Ontario Hatch Ontario Power Generation Network Rail
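The top-down (RCF-style) logic above can be sketched numerically: a contingency is read off the empirical distribution of historical overruns at a chosen confidence level. The overrun sample and the helper name below are mine for illustration; this is not the 316-project Turkish dataset cited in the post.

```python
import math

def rcf_contingency(overruns_pct, confidence=0.8):
    """Return the cost-overrun level (%) not exceeded by `confidence`
    share of historical reference-class projects (nearest-rank
    percentile). Illustrative sketch of reference class forecasting."""
    ordered = sorted(overruns_pct)
    # nearest-rank: smallest value covering `confidence` of the sample
    rank = max(1, math.ceil(confidence * len(ordered)))
    return ordered[rank - 1]

# Made-up reference class: overruns (%) from ten past projects.
historical_overruns = [-10.0, -5.0, 0.0, 4.0, 8.0, 11.0, 15.0, 19.0, 35.0, 60.0]
p80 = rcf_contingency(historical_overruns, confidence=0.8)
print(f"P80 contingency: {p80:.1f}% of base cost")  # 8 of 10 projects stayed at or under this
```

In practice this top-down number would then be compared against the bottom-up QRA result for the specific project, as the post recommends.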
Forecasts in ERP by Dr. Eng. Samir Lotfi Ali

The seminar provides a comprehensive overview of forecasting techniques, evaluation criteria, and features within ERP systems such as SAP, Oracle, and Microsoft Dynamics. Below is a summarized breakdown:

✅ Purpose of the Seminar
- Identifying the features necessary for long- and short-term planning
- Evaluating whether ERP applications meet these needs or require add-ons
- Assessing the proper application of forecasting features

✅ Forecasting Techniques
1. Qualitative techniques:
- Based on intuition and informed opinion; subjective
- Useful for medium- to long-term forecasting, especially for new products
2. Quantitative techniques:
- Extrinsic: relies on external indicators such as economic and demographic factors
- Intrinsic: uses historical data (e.g., moving averages, exponential smoothing) to predict future patterns

✅ Forecasting Basics
Forecasting involves predicting demand behavior over time using:
- Quantitative methods (mathematical formulas)
- Qualitative methods (subjective judgment)
Factors influencing demand include business conditions, competition, market trends, and promotional plans.

✅ Time Frames
- Short-to-medium range: daily, weekly, or monthly forecasts up to two years
- Long range: strategic planning beyond two years

✅ Forecast Evaluation Criteria
1. Accuracy: measures the closeness of forecasts to actual demand
2. Bias: indicates whether forecasts consistently overestimate or underestimate demand
3. Metrics include Mean Deviation (MD), Mean Absolute Deviation (MAD), Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Percent Error (MPE), and Mean Absolute Percent Error (MAPE)

✅ Demand Classification by Forecastability
Products are classified based on two coefficients:
1. Average Demand Interval (ADI): measures regularity in demand timing
2. Coefficient of Variation squared (CV²): measures variation in demand quantity

✅ Four Classifications
- Smooth demand
- Intermittent demand
- Erratic demand
- Lumpy demand

✅ Forecast Models
Various statistical models are discussed, including:
- Linear regression
- Moving average (three-month and five-month)
- Exponential smoothing (with or without seasonality)
- Holt's exponential smoothing for trend and seasonality
- Croston model for intermittent demand
- Autoregressive Moving Average (ARMA) and Autoregressive Integrated Moving Average (ARIMA) models

✅ Seasonality Calculation
Seasonal patterns are analyzed using monthly sales data weighted by relative importance.

✅ Criteria for Selecting Forecasting Methods
The best forecasting method minimizes bias and error while aligning with management's beliefs about demand patterns. Key selection metrics include cumulative forecast error, MAD, tracking signal, and alignment with accuracy goals.

#erp #Forecasting #SupplyChainManagement #DemandPlanning #BusinessForecasting #DataDrivenDecisions #SAP #OracleERP #MicrosoftDynamics
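The evaluation metrics and the ADI/CV² classification above can be sketched as follows. The 1.32 and 0.49 cut-offs are the widely cited Syntetos–Boylan thresholds; they are an assumption on my part, since the seminar summary does not state the boundary values.

```python
import statistics

def forecast_metrics(forecasts, actuals):
    """MD (bias), MAD, and MAPE as listed in the evaluation criteria."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    md = statistics.mean(errors)                              # Mean Deviation
    mad = statistics.mean(abs(e) for e in errors)             # Mean Absolute Deviation
    mape = statistics.mean(abs(e) / a for e, a in zip(errors, actuals)) * 100
    return md, mad, mape

def classify_demand(demand, period_count):
    """ADI/CV² demand classification.
    ADI  = total periods / periods with non-zero demand
    CV²  = (std / mean)² over the non-zero demand sizes
    Cut-offs 1.32 and 0.49 follow the common Syntetos-Boylan
    scheme (assumed; not given in the seminar summary)."""
    nonzero = [d for d in demand if d > 0]
    adi = period_count / len(nonzero)
    cv2 = (statistics.pstdev(nonzero) / statistics.mean(nonzero)) ** 2
    if adi < 1.32 and cv2 < 0.49:
        return "smooth"        # regular timing, stable quantity
    if adi >= 1.32 and cv2 < 0.49:
        return "intermittent"  # gaps in timing, stable quantity
    if adi < 1.32:
        return "erratic"       # regular timing, volatile quantity
    return "lumpy"             # gaps in timing AND volatile quantity

# Example: demand arrives in only 4 of 8 periods, but sizes are stable.
demand = [10, 0, 12, 0, 0, 11, 0, 9]
print(classify_demand(demand, len(demand)))  # "intermittent"
```

Croston-type models (listed under Forecast Models above) are the usual choice once an item falls into the intermittent or lumpy quadrants.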