Forecasting: Evaluation Criteria
To continue our series on forecasting, let's discuss one of its varying factors: the evaluation criterion. In classification, accuracy (the percentage of correct predictions) is often used: it is intuitive and easy to interpret. For regression problems such as forecasting, the choice is more complex.
Whatever the application and whatever prediction method is used, performance eventually needs to be evaluated. One motivation for evaluating results is to choose the most appropriate forecasting algorithm; another is to avoid overfitting. Choosing the right criterion for your problem is therefore a key step. In this post, we will focus on three accuracy measures.
The Root Mean Square Error (RMSE) is certainly the most widely used measure, mainly because of its simplicity and its familiarity from other domains. Its equation is given below:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\left(y_t - \hat{y}_t\right)^2}$$
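As a concrete companion to the formula, here is a minimal plain-Python sketch of RMSE (the function name and the sample series are illustrative, not from the original post):

```python
import math

def rmse(actual, forecast):
    # Root Mean Square Error: square each error, average, take the square root.
    # Squaring penalizes large errors more heavily than small ones.
    return math.sqrt(
        sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)
    )

print(rmse([10, 12, 14, 16], [11, 11, 15, 15]))  # 1.0 (every error is +/-1)
```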
The second measure is the Mean Absolute Percentage Error (MAPE):

$$\mathrm{MAPE} = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{y_t - \hat{y}_t}{y_t}\right|$$

Its main issue is being undefined when the denominator, the actual value $y_t$, is zero. This happens often with intermittent data. The third error measure is the Mean Absolute Scaled Error (MASE), which scales the forecast errors by the mean absolute error of the naïve forecast (last value) used as the denominator:

$$\mathrm{MASE} = \frac{\frac{1}{n}\sum_{t=1}^{n}\left|y_t - \hat{y}_t\right|}{\frac{1}{n-1}\sum_{t=2}^{n}\left|y_t - y_{t-1}\right|}$$
What error measure do you use and why? Post a comment to share your opinion.
Note: MASE equation updated on January 16th 2013.