Are you using these costly
and inadequate forecasting methods?

Explore the common mistakes and fallacies in forecasting, and why Excel can only get you so far.

A review of existing forecasting processes

We reviewed the forecasting process at a large industrial firm and benchmarked its historical forecast accuracy against the top 10 international forecasting experts. By the end of the project we had added a statistical model to the comparison and were stunned by the result: the statistical model outperformed all of them. Where did the accuracy deviations come from, and how could they be avoided?
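
For readers who want the mechanics: one common way to benchmark forecast accuracy is the mean absolute percentage error (MAPE). The exact metric used in the project is not specified here, so the Python sketch below uses made-up numbers purely to illustrate how such a comparison works.

```python
# Hypothetical illustration of benchmarking forecast accuracy with MAPE.
# The numbers are made up; only the comparison mechanic is the point.
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

actual          = [102, 98, 110, 105]   # realized values
expert_forecast = [95, 104, 100, 112]   # e.g. an expert's forecast
model_forecast  = [100, 99, 108, 107]   # e.g. a statistical model's forecast

print(f"Expert MAPE: {mape(actual, expert_forecast):.1f}%")
print(f"Model  MAPE: {mape(actual, model_forecast):.1f}%")
```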

After discussions with forecasters in the industrial and banking sectors, we found that the companies using the qualitative method relied on individual decision-making. This was surprising, considering that the method is highly exposed to decision-making biases such as the conjunction fallacy. Conducting the forecast as a group decision would dampen the impact of these biases and curb individual prejudice.

The case for using multiple forecasting methods
Apart from using group decisions, we saw that another way to decrease bias is to use multiple forecasting methods, such as time-series analysis, because independent forecasts can then confirm one another. Based on these learnings, we built Indicio. But the issues above are only part of the problem. If we zoom out and look at the bigger picture, we find numerous flaws in how forecasting is conducted today.
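
As a rough illustration of the idea (not of how Indicio works internally), the Python sketch below fits two standard methods, ARIMA and exponential smoothing, to a synthetic series and averages their forecasts. When the two methods broadly agree, that agreement is itself a useful sanity check.

```python
# A minimal sketch of combining two forecasting methods so they can
# confirm each other. Data is synthetic; this is not Indicio's method.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
history = 100 + np.cumsum(rng.normal(0.5, 2.0, size=60))  # synthetic series

arima_fc = ARIMA(history, order=(1, 1, 1)).fit().forecast(steps=6)
es_fc = ExponentialSmoothing(history, trend="add").fit().forecast(6)

combined = (arima_fc + es_fc) / 2  # simple average; big gaps flag disagreement
print("ARIMA    :", np.round(arima_fc, 1))
print("Smoothing:", np.round(es_fc, 1))
print("Combined :", np.round(combined, 1))
```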

Not excelling with Excel

As far as statistical models and forecasting are concerned, Excel can only get a statistician so far.

Using Excel for forecasting indirectly assumes that spreadsheets can serve as databases, and asking that duty of Excel is one of the most notorious sources of business inefficiency. Identifying leading indicators means using correlation as a starting point for finding causal relationships, and doing this in Excel leaves plenty of room for "funny" correlations, which are a forecaster's nightmare.

A funny, or spurious, correlation occurs when a statistical model finds a strong relationship between two series that in reality have nothing to do with one another. Treating such correlations as real is nothing short of a disaster if you are aiming for an accurate forecast.

Examples of funny correlations
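
To see how easily such correlations arise, consider the Python sketch below (a made-up illustration): two unrelated series that merely share an upward trend correlate strongly, and only differencing the data reveals that there is no real relationship.

```python
# Illustration of a spurious ("funny") correlation: two unrelated series
# that both happen to trend upward will correlate strongly.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(120)
ice_cream_sales = 50 + 0.8 * t + rng.normal(0, 5, size=t.size)  # made-up series
software_bugs   = 10 + 0.3 * t + rng.normal(0, 3, size=t.size)  # made-up series

print(f"Correlation of levels:  "
      f"{np.corrcoef(ice_cream_sales, software_bugs)[0, 1]:.2f}")

# Correlating period-to-period changes strips out the shared trend
# and exposes the lack of a real relationship.
print(f"Correlation of changes: "
      f"{np.corrcoef(np.diff(ice_cream_sales), np.diff(software_bugs))[0, 1]:.2f}")
```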

The impact of data science immaturity on forecast accuracy

When organizations started collecting data around 60 years ago, that data was cleaned and presented in standard reports that were eventually digitized into what we today call ERP (Enterprise Resource Planning) and BI (Business Intelligence) software.

The former acts as the collector of data, while the latter analyzes the data collected. What the two have in common is that their built-in forecasting models are designed only to describe historical development, which makes them less accurate. In contrast, the emergence of machine learning has enabled predictive models that are optimized to describe the future and are therefore more accurate.

Many organizations today use some sort of ERP/BI constellation, which can generate an understanding of past events and thereby provide insights for adjusting production and budgets. Unfortunately, the lack of predictive capability means users end up drawing insights from descriptive data of inadequate accuracy rather than from genuinely predictive data.

Programming statistical models in R -
the good, the bad, and the ugly

Programming statistical models in R for forecasting is fairly common, and many companies hire a statistician to do just that. However, it can quickly become a costly and time-consuming endeavour, considering that every model in use needs to be examined and every variable needs to be tested. Nor is it guaranteed that a statistician can do it single-handedly, unless they are knowledgeable in both programming and statistics.

It is also an advantage to be a skilled mathematician, in order to judge whether new theories and models are useful. But finding a person who is knowledgeable in all of these subjects and who at the same time understands market drivers is rare. For the risk-averse, it is wise not to rely on a single employee for this, considering how difficult it is to fully grasp code built by someone else.

A platform like Indicio is built with the user in mind - to simplify forecasting. Built on machine learning, the interface lets you effortlessly produce forecasts based on the latest research, run advanced statistical models, and incorporate big data, all at the same time.

It chooses the most appropriate machine learning algorithms, then automatically optimizes data pre-processing, feature engineering, and tuning parameters for each algorithm to create and rank highly accurate models. It then recommends the best weighted set of models to deploy for the company’s data and prediction target.
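
As a rough sketch of the general principle of accuracy-weighted model combination (not Indicio's actual algorithm, which is not described here), the hypothetical Python function below ranks candidate models on a holdout sample and weights them by inverse error.

```python
# A rough sketch of ranking models and weighting them by out-of-sample
# accuracy. This is NOT Indicio's actual algorithm, only an illustration
# of accuracy-weighted model combination. All names and data are made up.
import numpy as np

def rank_and_weight(holdout_actual, model_forecasts):
    """Rank candidate models by holdout MAE and derive inverse-error weights."""
    holdout_actual = np.asarray(holdout_actual, float)
    errors = {name: np.mean(np.abs(holdout_actual - np.asarray(fc, float)))
              for name, fc in model_forecasts.items()}
    inv = {name: 1.0 / e for name, e in errors.items()}
    total = sum(inv.values())
    weights = {name: v / total for name, v in inv.items()}
    ranking = sorted(errors, key=errors.get)  # best (lowest error) first
    return ranking, weights

actual = [100, 103, 101, 107]               # holdout observations
candidates = {
    "arima":      [ 98, 104, 102, 108],
    "smoothing":  [101, 101,  99, 104],
    "regression": [ 95,  99, 105, 112],
}
ranking, weights = rank_and_weight(actual, candidates)
print("Ranking:", ranking)
print("Weights:", {k: round(v, 2) for k, v in weights.items()})
```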

Thus, instead of creating statistical models that would take a statistician months to develop, Indicio can build hundreds of models and deploy the best-performing model within hours.

Virtual demo

View our click-through demo

Experience the ease and accuracy of Indicio’s automated forecasting platform firsthand. Click to start a virtual demo today and discover how our cutting-edge tools can streamline your decision-making process.