Once-reliable forecasting tools may now be working from a year of anomalous data that undermines their accuracy. Here’s how to avoid a business catastrophe.
A great deal of effort and technology expenditure has gone toward forecasting, and for good reason. The ability to predict key business metrics, from production volumes to sales, even directionally, creates a significant competitive asset. In many relatively stable industries, forecasts have been accurate for so long that complex tasks from production planning to hiring have become routine matters of pulling the forecast, without questioning its accuracy, and allocating resources and plans from there. Even something as simple as planning orders for laptops and end-user computing hardware may be guided by an internal forecast based on historical norms and hiring predictions.
When history is a poor teacher
We’ve all seen the impacts of a world gone haywire. Consumers who have tried to buy a house, find a popular new vehicle or even purchase a sheet of plywood have been greeted by scarce supply and extraordinary prices. Much of this fluctuation is explained by the unprecedented impacts of global economic shutdowns, raw material shortages and massive shifts in where people live and work, all trends that were well outside historical norms.
Data scientists term this phenomenon “data drift,” which is essentially a fancy term for the old idea of GIGO: Garbage In, Garbage Out. While the data from the pandemic are not “garbage” in the sense that they are inaccurate or flawed, they are unusual because they are unlikely to indicate future trends. When these data are fed into previously accurate forecasting models, GIGO takes control and produces an inaccurate result.
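To make the idea concrete, here is a minimal Python sketch of one common way to check for drift: comparing a recent period’s distribution against a historical baseline with a two-sample Kolmogorov-Smirnov test from SciPy. The monthly order counts are hypothetical illustration data, not figures from any real system.

```python
# A minimal sketch of one way to flag data drift: compare the
# distribution of a recent period against a historical baseline
# using a two-sample Kolmogorov-Smirnov test.
# The order counts below are hypothetical illustration data.
from scipy.stats import ks_2samp

baseline_2019 = [110, 98, 105, 120, 101, 99, 115, 108, 103, 97, 112, 106]
observed_2020 = [104, 95, 480, 610, 550, 590, 530, 515, 560, 540, 505, 520]

stat, p_value = ks_2samp(baseline_2019, observed_2020)

# A small p-value suggests the two periods come from very different
# distributions -- a hint that models trained on the newer data
# may not generalize to more typical conditions.
if p_value < 0.05:
    print(f"Possible drift detected (KS statistic={stat:.2f}, p={p_value:.4f})")
else:
    print("No significant drift detected")
```

A check like this won’t tell you which period is the “right” one, but it gives teams an objective trigger to investigate before a forecast built on unusual data drives real decisions.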
As a wildly simplified example, consider your webcam purchases in 2019 vs. 2020. With increased remote work, purchases likely skyrocketed, and if you used 2020 purchase patterns to predict future demand, you’d likely end up with cases of unused webcams collecting dust in a closet, perhaps next to boxes of hand sanitizer and safety stocks of toilet paper that were also triggered by the strange days of 2020.
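A hedged sketch of the arithmetic, using made-up monthly purchase figures: a naive trailing-average forecast trained on the anomalous year produces a wildly inflated demand estimate.

```python
# Hypothetical monthly webcam purchases, illustrating how a naive
# forecast (here, a simple trailing average) inherits the anomaly.
purchases_2019 = [12, 10, 11, 13, 12, 11, 10, 12, 14, 11, 12, 13]
purchases_2020 = [11, 12, 95, 120, 88, 60, 45, 40, 35, 30, 28, 25]

forecast_from_2019 = sum(purchases_2019) / len(purchases_2019)
forecast_from_2020 = sum(purchases_2020) / len(purchases_2020)

print(f"Monthly demand forecast from 2019 data: {forecast_from_2019:.0f}")
print(f"Monthly demand forecast from 2020 data: {forecast_from_2020:.0f}")
# The 2020-trained figure is several times higher -- exactly the kind
# of output that leaves cases of webcams gathering dust in a closet.
```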
Now consider the adverse impact should this pattern extend to a critical area of your business with long lead times, like production planning. If your forecast is built on 2020 data, your organization might be unable to fulfill 2021 demand and cede ground to competitors, or be left with a crippling amount of inventory that can’t be sold.
Education is the best weapon
This risk is intuitively obvious, but the danger lies in the fact that too many organizations regard forecasts as infallible predictions that eliminate the risk of a human making a bad choice. Questioning the data is treated as a low-grade form of insubordination, yet healthy skepticism of forecasted results is exactly what is needed as companies navigate the post-pandemic world.
The seemingly obvious solution is to rework forecasting algorithms, perhaps embedding real-time machine learning or other techniques that are less reliant on historical data into your forecasting processes. However, this can be a costly and time-consuming effort, especially for organizations that lack a significant internal data science capability.
Identify key processes where forecasts may play an outsize role, along with the people who interact with those processes. Areas like sales and marketing, production planning and staffing, or other future-focused functions are good places to start your investigation. If you have well-documented process maps or training materials, this job will be easier. Ultimately, the goal is to identify the people who use forecast data to perform their jobs and inform their decision making.
Ensure that the consumers of forecast data, and their management, are aware of the risks of relying on forecasting tools. Results that don’t pass the sniff test should be investigated and can legitimately be questioned. If possible, create a small team of data and forecasting experts who can help their colleagues rework forecasts or do ad hoc runs that ignore anomalous data.
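As one illustration of such an ad hoc run, the sketch below down-weights a known anomalous window before averaging. The demand figures and the choice of window are hypothetical, and a real forecasting pipeline would apply the same idea inside whatever model it already uses.

```python
import numpy as np

# A sketch of an "ad hoc run" that down-weights a known anomalous
# window instead of letting it dominate the forecast.
# Demand figures and the anomalous window are hypothetical.
demand = np.array([100, 102, 99, 104, 101, 480, 510, 495, 105, 107])
weights = np.ones_like(demand, dtype=float)
weights[5:8] = 0.1  # down-weight the months known to be anomalous

standard_forecast = demand.mean()
adjusted_forecast = np.average(demand, weights=weights)

print(f"Forecast using all data equally:     {standard_forecast:.0f}")
print(f"Forecast with anomaly down-weighted: {adjusted_forecast:.0f}")
```

The down-weighting factor is a judgment call, which is precisely why a small expert team, rather than an unquestioned automated pipeline, should own these reruns.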
In too many companies, “questioning the machine” is forbidden, either explicitly or through unofficial, implied policies. Empowering employees to use their own experience and expertise will ensure your company is not led astray by forecasts that have been fed an extraordinarily odd year’s worth of data.