Forecasting: Evaluation Criteria

October 15, 2012 by Sandro Saitta
Filed under: Uncategorized 


To continue our series on forecasting, let's discuss one of the varying factors: the evaluation criterion. In classification, accuracy (the percentage of correct predictions) is often used; it is intuitive and easy to interpret. In the case of regression (e.g. forecasting), the choice is more complex.

Whatever the application and the prediction method used, at some point performance needs to be evaluated. One motivation for evaluating results is to choose the most appropriate forecasting algorithm. Another is to avoid overfitting. Choosing the right criterion for your problem is therefore a key step. In this post, we will focus on three accuracy measures.

The Root Mean Square Error (RMSE) is certainly the most widely used measure, mainly because of its simplicity and its use in many other domains. Its equation is given below:

RMSE = \sqrt{\frac{1}{n} \sum_{t=1}^{n} \left( y_t - \hat{y}_t \right)^2}

where y_t is the observed value, \hat{y}_t the forecast at time t, and n the number of observations.
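As a minimal sketch of how this can be computed (assuming NumPy, with hypothetical arrays y for the actual values and y_hat for the forecasts):

```python
import numpy as np

def rmse(y, y_hat):
    """Root Mean Square Error between actuals y and forecasts y_hat."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return np.sqrt(np.mean((y - y_hat) ** 2))
```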
The main drawback of RMSE is that it is scale dependent: it cannot be used to compare forecasts across different time series. The second measure is the Mean Absolute Percentage Error (MAPE), which is scale independent:

MAPE = \frac{100}{n} \sum_{t=1}^{n} \left| \frac{y_t - \hat{y}_t}{y_t} \right|
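A similar sketch for MAPE, under the same assumptions (hypothetical y and y_hat arrays):

```python
import numpy as np

def mape(y, y_hat):
    """Mean Absolute Percentage Error, in percent.
    Undefined (division by zero) whenever an actual value is 0."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return 100.0 * np.mean(np.abs((y - y_hat) / y))
```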
MAPE's main issue is that it is undefined when the actual value (the denominator) is zero, which happens often with intermittent data. The third error measure is the Mean Absolute Scaled Error (MASE). The naïve forecast (last observed value) is used in the denominator:

MASE = \frac{\frac{1}{n} \sum_{t=1}^{n} \left| y_t - \hat{y}_t \right|}{\frac{1}{n-1} \sum_{t=2}^{n} \left| y_t - y_{t-1} \right|}
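And a sketch for MASE where, following the description above, the denominator is the mean absolute error of the naïve (last-value) forecast computed on the same series (an assumption for illustration; other formulations compute it on the training data):

```python
import numpy as np

def mase(y, y_hat):
    """Mean Absolute Scaled Error: MAE of the forecast divided by the MAE
    of the one-step naive forecast (previous value) on the same series."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    mae_forecast = np.mean(np.abs(y - y_hat))
    mae_naive = np.mean(np.abs(y[1:] - y[:-1]))  # error of the naive forecast
    return mae_forecast / mae_naive
```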
MASE is scale independent, and a value below 1 means the forecast outperforms the naïve forecast, which makes a good benchmark.

What error measure do you use and why? Post a comment to share your opinion.

Note: MASE equation updated on January 16th 2013.


Comments


Sébastien Derivaux on Sat, 20th Oct 2012 9:06 pm

Such measures are really helpful for us (analysts), not least to keep track of our progress, but they are not that interesting for the business.

You won't convince anyone with such a four-letter acronym. The only indicator that counts here is the amount of money the forecasting can save versus the current model. That requires some post-processing to derive a dollar value from the model. Sometimes decreasing the RMSE does not save money, quite the opposite.

For the analyst, the RMSE, the MAE (Mean Absolute Error) or the correlation between the forecast and the test set can give insight into whether we are a bit wrong most of the time, or very wrong on a few critical spots.
