Time Series Evaluation Metrics – MAPE vs WMAPE vs SMAPE

Introduction

In time series forecasting, choosing the right evaluation metric is as important as building the model itself. Metrics not only quantify prediction accuracy but also help in refining models to better suit business goals. From weather forecasting and stock price prediction to demand planning and inventory control—accuracy determines real-world impact.

Among the most widely used metrics are Mean Absolute Percentage Error (MAPE), Weighted Mean Absolute Percentage Error (WMAPE), and Symmetric Mean Absolute Percentage Error (SMAPE).

  • MAPE expresses forecast error as a percentage, making it easy to interpret, but can exaggerate errors when actual values are small.
  • WMAPE adjusts for this by weighting errors according to actual values, reducing distortion from low-value data points.
  • SMAPE normalises error using both actual and predicted values, giving a more balanced view of over- and under-predictions.

The right choice depends on the dataset’s characteristics and the goals of the analysis. Let’s break them down.


1. MAPE – Mean Absolute Percentage Error

Definition:
MAPE measures the average absolute percentage difference between forecasted and actual values. It’s one of the most popular metrics in forecasting due to its simplicity and intuitive interpretation.

Formula: MAPE = \frac{1}{n} \sum_{t=1}^{n} \left| \frac{A_t - F_t}{A_t} \right| \times 100

Where:

  • A_t = Actual value
  • F_t = Forecast value

How it works:
For each observation, the absolute error is calculated as a percentage of the actual value, and then averaged across all observations.

Advantages:

  • Easy to interpret – Errors are expressed in percentage terms.
  • Comparable across scales – Works with different units and magnitudes.

Limitations:

  • Division by zero problem – Undefined if any actual value is zero.
  • Highly sensitive to low values – Small actual values can cause inflated percentage errors.

Best suited for:
Datasets with relatively large and consistent actual values.
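As a sketch, the formula above takes only a few lines of NumPy. The function name and the explicit zero-check are our own choices here, not part of any standard library:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in %.

    Raises ValueError if any actual value is zero, since MAPE
    is undefined in that case (the division-by-zero problem).
    """
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    if np.any(actual == 0):
        raise ValueError("MAPE is undefined when an actual value is zero")
    return np.mean(np.abs((actual - forecast) / actual)) * 100

# Each point is off by 10% of its actual value, so MAPE is 10%:
print(mape([100, 200, 300], [110, 180, 330]))  # 10.0
```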

2. WMAPE – Weighted Mean Absolute Percentage Error

Definition:
WMAPE refines MAPE by weighting errors based on actual values. This prevents small actual values from disproportionately influencing the final metric.

Formula: WMAPE = \frac{\sum_{t=1}^{n} |A_t - F_t|}{\sum_{t=1}^{n} A_t} \times 100

How it works:
Instead of simply averaging percentage errors, WMAPE divides the total absolute error by the total sum of actual values, making larger values more influential in the accuracy score.

Advantages:

  • Reduces small value bias – Less distortion from near-zero actuals.
  • Good for diverse datasets – Works well when actual values vary greatly.

Limitations:

  • Bias toward large values – May overemphasize accuracy for higher-value observations.

Best suited for:
Sales, revenue, and demand forecasting where the significance of an error depends on the size of the actual value.
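A minimal sketch of the formula, with an invented two-point example chosen to show how WMAPE damps the effect of a near-zero actual that would dominate plain MAPE:

```python
import numpy as np

def wmape(actual, forecast):
    """Weighted MAPE, in %: total absolute error over total actuals."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.abs(actual - forecast).sum() / actual.sum() * 100

# The first point has a 100% percentage error (actual 1, forecast 2),
# which would push MAPE to 50% on this pair. WMAPE instead weights it
# by its tiny share of total volume: 1 / 1001 * 100, roughly 0.1%.
print(wmape([1, 1000], [2, 1000]))
```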

3. SMAPE – Symmetric Mean Absolute Percentage Error

Definition:
SMAPE modifies MAPE to make the metric symmetric, considering both actual and forecasted values in its calculation. This balances the penalty for over- and under-predictions.

Formula: SMAPE = \frac{1}{n} \sum_{t=1}^{n} \frac{|A_t - F_t|}{(|A_t| + |F_t|)/2} \times 100

How it works:
The absolute error is divided by the average of actual and forecast values, reducing skew when there are large discrepancies.

Advantages:

  • Balanced treatment – Equal penalty for over- and under-estimation.
  • Effective for high variance data – Handles datasets with wide fluctuations better than MAPE.

Limitations:

  • Interpretation complexity – Ranges from 0% to 200%, which can be less intuitive.
  • Zero value issue – The term is undefined when both actual and forecast are zero, so implementations must handle that case explicitly.

Best suited for:
Financial and economic forecasting where equal weighting of errors in both directions is important.
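A sketch of the formula in NumPy. Treating terms where both values are zero as zero error is a common convention, not part of the definition itself:

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric MAPE, in %, bounded between 0 and 200.

    Terms where actual and forecast are both zero are counted as
    zero error -- a convention, since the ratio is undefined there.
    """
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    err = np.abs(actual - forecast)
    denom = (np.abs(actual) + np.abs(forecast)) / 2
    terms = np.divide(err, denom, out=np.zeros_like(err), where=denom != 0)
    return np.mean(terms) * 100

# Over- and under-forecasting by the same amounts score identically:
# both terms are 50 / 125 = 0.4, so SMAPE is 40%.
print(smape([100, 150], [150, 100]))  # 40.0
```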

4. Comparison Table – MAPE vs WMAPE vs SMAPE

Metric | Formula Base                               | Range     | Best For                                     | Key Weakness
MAPE   | % error from actual value                  | 0% – ∞    | Datasets with stable, non-zero actual values | Inflated errors for small actuals
WMAPE  | Weighted by sum of actuals                 | 0% – ∞    | Datasets with varying magnitudes             | Bias toward large values
SMAPE  | Normalised by average of actual & forecast | 0% – 200% | High-variance datasets                       | Less intuitive range

5. Practical Use Cases

  • MAPE: General forecasting when all actual values are significantly above zero and interpretability is a priority.
  • WMAPE: Business cases like retail sales or logistics forecasting where large value predictions carry more importance.
  • SMAPE: Financial market predictions or economic indicators where balanced error handling is crucial.
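To see the three behaviours side by side, here is a small comparison on invented demand data containing one near-zero actual (all numbers are made up for illustration):

```python
import numpy as np

# Invented demand series; the first actual is deliberately near zero.
actual = np.array([2.0, 100.0, 120.0, 80.0])
forecast = np.array([4.0, 95.0, 125.0, 78.0])

abs_err = np.abs(actual - forecast)
mape = np.mean(abs_err / actual) * 100
wmape = abs_err.sum() / actual.sum() * 100
smape = np.mean(abs_err / ((np.abs(actual) + np.abs(forecast)) / 2)) * 100

# MAPE is dominated by the 100% error at t = 0; WMAPE weights that
# point by its tiny share of total volume; SMAPE lands in between
# thanks to its symmetric denominator.
print(f"MAPE:  {mape:.1f}%")
print(f"WMAPE: {wmape:.1f}%")
print(f"SMAPE: {smape:.1f}%")
```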


Conclusion

MAPE, WMAPE, and SMAPE each offer distinct advantages depending on the dataset and forecasting goals:

  • MAPE is intuitive and easy to explain but struggles with low-value actuals.
  • WMAPE adjusts for this, making it better for datasets with wide value ranges.
  • SMAPE ensures fairness between over- and under-predictions, ideal for high-variance or volatile data.

Choosing the right metric is not about finding the “best” one universally—it’s about aligning the metric with your data characteristics and business objectives.

For stable datasets, MAPE might be all you need. For varied scales, WMAPE is more reliable. And for balanced evaluation across fluctuating data, SMAPE is the go-to choice.

At Updategadh, our approach to forecasting always begins with selecting the right accuracy metric—because better measurement leads to better decisions.

