How to mitigate forecast biases and human errors

We cover some of the common forecast biases that drive inaccurate forecasts, and how to avoid the influence of bias.

Forecast biases and human errors

As humans, we are riddled with biases, both conscious and unconscious. Even the most analytical, data-driven person can't escape this mental trap.

Biases lead us to base opinions and decisions on our own preconceptions of what we expect the outcome of research or an analysis to be.

Because of that, the results of a forecast aren't allowed to speak for themselves; instead, they act as support for whatever idea the forecast analyst is already leaning towards. Biases are not limited to people: they also leak into the models they build.

We’ll cover how to avoid the influence of bias, but first let’s learn about a couple that drive inaccurate forecasts.

" Biases are not limited to people,
they also leak into the models they build. "

Confirmation bias

The tendency to confirm preconceptions by tweaking data and models so that they conform to them. This happens by focusing solely on the information that confirms our beliefs rather than on the information that challenges them. Forecast errors are a good example.

Instead of trying to understand why there is an error, it’s easier to look at the results that support the preconception.

The danger of confirmation bias arises when a forecast influenced by it is used, for example, to adjust the forecast model. We're only human, and when a lot is at stake it is easy to fall victim to seeing what we want to see, rather than what is actually there.

Overfitting bias

This involves an overly complex model that describes noise (randomness) in the dataset rather than the underlying statistical relationship.

Overfitting happens often, and many people (or their forecasting systems) do it unknowingly every day. It occurs when a statistical model is allowed to fit as many parameters as needed to explain every deviation in the data.

It's like adding a trend line to a plot in Excel and adding polynomial terms to it until the trend line follows the historical data perfectly.

With enough parameters and enough time, a model can be fitted to almost any dataset. But there's no guarantee that the model will generate good forecasts, or that it should be used at all.
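To make this concrete, here is a minimal sketch in Python (using NumPy, with made-up data) of the trend-line trap: a high-degree polynomial follows the history almost perfectly, yet forecasts far worse than a simple straight line once it is evaluated on data it has not seen.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 2 * x + rng.normal(0, 0.3, size=x.size)  # the true relationship is just a noisy line

# Hold out the last 10 points to play the role of "future" data
x_train, y_train = x[:20], y[:20]
x_test, y_test = x[20:], y[20:]

results = {}
for degree in (1, 8):
    coeffs = np.polyfit(x_train, y_train, degree)
    in_sample = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    out_of_sample = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    results[degree] = (in_sample, out_of_sample)
    print(f"degree {degree}: in-sample MSE {in_sample:.3f}, out-of-sample MSE {out_of_sample:.3f}")
```

The degree-8 polynomial hugs the historical points more closely, but its out-of-sample error blows up, which is exactly why performance should be judged on held-out data.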

Conjunction fallacy

The conjunction fallacy is a common reasoning error in which we believe that two events happening together is more probable than one of those events happening alone. In forecasting, this often shows up in scenario analyses involving more than one event, where the resulting conditional forecast has a much lower probability than intuition suggests.
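A quick sketch with made-up probabilities shows why. Even when the events are independent, the joint scenario can never be more likely than either event on its own:

```python
# Hypothetical scenario probabilities, for illustration only
p_rate_hike = 0.6    # probability the central bank raises rates
p_demand_drop = 0.5  # probability demand falls next quarter

# Assuming independence, the probability of BOTH happening is the product,
# which is necessarily no greater than either individual probability.
p_both = p_rate_hike * p_demand_drop
print(p_both)  # 0.3, lower than both 0.6 and 0.5
```

A scenario built on several simultaneous events is therefore inherently less probable than any of its parts, even though it may feel more plausible as a story.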

How to avoid the influence of bias on your forecasts

Implementing a Structured Forecasting Process

A structured forecasting process can significantly mitigate the impact of biases and human errors. The following strategies help ensure that human input is systematically evaluated and monitored, leading to more objective and accurate forecasts.

  1. Define Clear Objectives and Assumptions
    Start by clearly outlining the purpose of your forecast and the assumptions you are making. This will help prevent subjective opinions from influencing the forecasting process. Documenting these aspects ensures transparency and acts as a guide for evaluating any deviations that occur later.
  2. Use a Model Evaluation Framework
    Evaluate forecasting models using standardized metrics. Assess model performance not only on historical data but also on out-of-sample data to guard against overfitting bias. Establish criteria for model selection, emphasizing simplicity and robustness over complexity. Automated monitoring tools can further help track model performance over time.
  3. Incorporate Scenario Analysis
    When conducting scenario analysis, be mindful of conjunction fallacy. Assign probabilities objectively to each scenario, and where possible, rely on probabilistic models that evaluate the likelihood of individual and combined events separately. Avoid creating overly complicated scenarios that dilute forecast accuracy.
  4. Reflect on Past Forecasts
    Maintain a record of past forecasts, including the methodologies and assumptions used. Periodically review these forecasts to understand where bias may have influenced outcomes and use these insights to improve future forecasting processes.
"By weighting a large set of models,
we capture the strengths of each individual model.
This has been proven to be more accurate."

Why not just use one good model?

All forecast models have their advantages. By weighting a large set of models, we capture the strengths of each individual model. According to the latest forecasting research, this approach has proven to be more accurate than relying on any single model.

Virtual demo

View our click-through demo

Experience the ease and accuracy of Indicio’s automated forecasting platform firsthand. Click to start a virtual demo today and discover how our cutting-edge tools can streamline your decision-making process.