As humans, we are riddled with biases, both unconscious and conscious. Even the most analytical, data-driven person can't escape this mental trap.
Biases lead us to base opinions and decisions on our own preconceptions of what we expect the outcome of research or an analysis to be.
As a result, the results of a forecast aren't allowed to speak for themselves; instead, they act as support for whatever idea the forecast analyst is already leaning towards. And biases are not limited to people: they also leak into the models they build.
We'll cover how to avoid the influence of bias, but first let's look at a couple of the biases that drive inaccurate forecasts.
" Biases are not limited to people,
they also leak into the models they build. "
Confirmation bias is the tendency to confirm preconceptions by tweaking data and models until they conform to those preconceptions. It happens when we focus solely on the information that confirms our beliefs rather than on the information that challenges them. Forecast errors are a good example: instead of trying to understand why an error occurred, it's easier to look only at the results that support the preconception.
The danger of confirmation bias arises when it influences a forecast that is then used, for example, to adjust the forecast model. We're only human, and when a lot is at stake it's easy to fall victim to seeing what we want to see rather than what is actually there.
Overfitting involves an overly complex model that describes the noise (randomness) in the dataset rather than the underlying statistical relationship. It happens often, and many people (or their forecasting systems) do it unknowingly every day. Overfitting occurs when a statistical model is allowed to fit as many parameters as it needs to explain every deviation in the data.
It's like adding a trend line to a plot in Excel and raising its polynomial degree until the trend line follows the historical data perfectly.
With enough parameters and enough time, a model can be fitted to almost any dataset. But there's no guarantee that the model will generate good forecasts, or even that it should be used at all.
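To make this concrete, here is a minimal sketch in Python using synthetic data (the series, the train/holdout split, and the polynomial degrees are all illustrative assumptions, not taken from any real forecast): a degree-10 polynomial tracks the history more closely than a straight line, yet forecasts the held-out months far worse.

```python
# A toy demonstration of overfitting: fit polynomials of two degrees to a
# noisy linear series, then compare in-sample error with holdout error.
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(24)                             # 24 months of hypothetical history
y = 10 + 0.5 * t + rng.normal(0, 2, t.size)   # linear trend plus noise

train_t, test_t = t[:18], t[18:]              # hold out the last 6 months
train_y, test_y = y[:18], y[18:]

for degree in (1, 10):
    coeffs = np.polyfit(train_t, train_y, degree)
    fit_err = np.sqrt(np.mean((np.polyval(coeffs, train_t) - train_y) ** 2))
    fcast_err = np.sqrt(np.mean((np.polyval(coeffs, test_t) - test_y) ** 2))
    print(f"degree {degree:2d}: in-sample RMSE {fit_err:.2f}, "
          f"holdout RMSE {fcast_err:.2f}")
```

Run it and the pattern appears immediately: the higher-degree fit hugs the historical points, but its error on the held-out months balloons, because it has learned the noise rather than the trend.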
The conjunction fallacy is a common reasoning error in which we believe that two events happening in conjunction are more probable than one of those events happening alone. In reality, the probability of two events occurring together can never exceed the probability of either event on its own. From a forecasting perspective this often surfaces in scenario analyses involving more than one event, which result in a conditional forecast whose probability is lower than intuition suggests.
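As a toy illustration (the scenario events and probabilities below are hypothetical, chosen only to show the arithmetic), the joint scenario can never be more likely than either event alone:

```python
# The conjunction rule: P(A and B) <= min(P(A), P(B)).
p_demand_up = 0.60      # hypothetical: demand rises next quarter
p_price_stable = 0.70   # hypothetical: input prices stay flat

# Assuming the events are independent for simplicity, P(A and B) = P(A) * P(B).
p_both = p_demand_up * p_price_stable
print(f"P(both) = {p_both:.2f}  <=  min({p_demand_up}, {p_price_stable})")
```

A scenario built on both events holds with only 42% probability here, even though each event on its own looks more likely than not.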
A structured forecasting process can significantly mitigate the impact of biases and human errors. The following strategies help ensure that human input is systematically evaluated and monitored, leading to more objective and accurate forecasts.
"By weighting a large set of models,
we capture the strengths of each individual model.
This has been proven to be more accurate."
All forecast models have their advantages. By weighting a large set of models, we capture the strengths of each individual model, and the latest forecasting research shows that such combinations are more accurate than relying on any single model.
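As an illustration of the general idea (the forecasts, error figures, and inverse-error weighting scheme below are assumptions made for this sketch, not a description of Indicio's actual method), a weighted combination might look like this:

```python
# A minimal sketch of forecast combination: weight each model inversely
# to its recent out-of-sample error, so better-performing models
# contribute more to the combined forecast.
import numpy as np

forecasts = np.array([102.0, 98.5, 105.0])   # hypothetical next-period forecasts
rmse = np.array([2.0, 1.5, 4.0])             # hypothetical holdout RMSE per model

weights = (1 / rmse) / (1 / rmse).sum()      # normalize weights to sum to 1
combined = weights @ forecasts
print(f"weights: {np.round(weights, 2)}, combined forecast: {combined:.1f}")
```

Inverse-error weighting is only one of many possible schemes, but it captures the core benefit: no single model's blind spot dominates the final forecast.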
Experience the ease and accuracy of Indicio’s automated forecasting platform firsthand. Click to start a virtual demo today and discover how our cutting-edge tools can streamline your decision-making process.