Forecasting: Evaluation Criteria

To continue our series on forecasting, let’s discuss one of the key choices: the evaluation criterion. In classification, accuracy (the percentage of correct predictions) is often used; it is intuitive and easy to interpret. In regression (e.g. forecasting), the choice is more complex.

Whatever the application and the prediction method used, at some point performance needs to be evaluated. One motivation for evaluating results is to choose the most appropriate forecasting algorithm; another is to avoid overfitting. Choosing the right criterion for your problem is therefore a key step. In this post, we will focus on three accuracy measures.

The Root Mean Square Error (RMSE) is certainly the most widely used measure, mainly because of its simplicity and its use in other domains. Its equation is given below:
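Using the standard definition, where $y_t$ is the observed value, $f_t$ the forecast, and $n$ the number of forecast points:

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\left(y_t - f_t\right)^2}
```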

The main drawback of RMSE is that it is scale dependent: it cannot be used to compare errors across different time series. The second measure is the Mean Absolute Percentage Error (MAPE), which is scale independent:
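With the same notation as above (standard definition, expressed as a percentage):

```latex
\mathrm{MAPE} = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{y_t - f_t}{y_t}\right|
```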

Its main issue is that it is undefined when the denominator (the actual value) is zero, which happens often with intermittent data. The third error measure is the Mean Absolute Scaled Error (MASE), which scales the error by the error of the naïve forecast (last observed value), used as the denominator:
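Again with the same notation, the standard definition scales the mean absolute error by the in-sample mean absolute error of the one-step naïve forecast:

```latex
\mathrm{MASE} = \frac{\frac{1}{n}\sum_{t=1}^{n}\left|y_t - f_t\right|}
                     {\frac{1}{n-1}\sum_{t=2}^{n}\left|y_t - y_{t-1}\right|}
```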

The measure is scale independent, and a value below 1 means the forecast beats the naïve forecast (a good benchmark).
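As a quick illustration, the three measures can be computed in a few lines of NumPy. This is a sketch following the definitions above; the sample values are made up for the example.

```python
import numpy as np

def rmse(y, f):
    # Root Mean Square Error: scale dependent
    return np.sqrt(np.mean((y - f) ** 2))

def mape(y, f):
    # Mean Absolute Percentage Error: undefined when any actual value is zero
    return 100 * np.mean(np.abs((y - f) / y))

def mase(y, f):
    # Mean Absolute Scaled Error: error scaled by the mean absolute error
    # of the one-step naive forecast (previous observed value)
    naive_error = np.mean(np.abs(np.diff(y)))
    return np.mean(np.abs(y - f)) / naive_error

# Made-up observations and forecasts, for illustration only
y = np.array([10.0, 12.0, 11.0, 13.0])
f = np.array([11.0, 11.0, 12.0, 12.0])
print(rmse(y, f), mape(y, f), mase(y, f))
```

Here the MASE comes out below 1, so this (hypothetical) forecast would beat the naïve benchmark on these data.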

What error measure do you use and why? Post a comment to share your opinion.

Note: MASE equation updated on January 16th 2013.



3 comments found on “Forecasting: Evaluation Criteria”

  1. Such measures are really helpful for us (analysts), not least to keep track of our progress, but they’re not that interesting for the business.

    You couldn’t convince anyone with such a four-letter acronym. The only indicator that counts here is the amount of money such forecasting can save versus the current model. That needs some post-processing to derive a $ value from the model. Sometimes decreasing the RMSE is not saving money, quite the opposite.

    For the analyst, the RMSE/MAE (Mean Absolute Error) or the correlation between the forecast and the test set can give insight into whether we are a bit wrong most of the time, or very wrong in some critical spots.

Comments are closed.