TL;DR
WAPE (Weighted Absolute Percentage Error) measures forecast accuracy by dividing the sum of absolute errors by the sum of actual values — weighting each period by its size rather than averaging period-level percentages. WAPE is the right accuracy metric when actual values vary widely (e.g., enterprise SaaS forecasts where a single $2M deal dominates a quarter). For SaaS sales forecasts, WAPE under 10% is excellent; 10–18% is acceptable.
What is WAPE?
WAPE (Weighted Absolute Percentage Error, sometimes called weighted MAPE) is a forecast-accuracy metric that aggregates absolute errors across periods and divides by the sum of actuals — instead of computing per-period percentage errors and averaging them. The result is a percentage measure that weights large periods more heavily than small ones.
MAPE (Mean Absolute Percentage Error) computes percentage error per period, then averages — giving each period equal weight regardless of size. WAPE computes total error across all periods, then divides by total actual — giving large periods proportionally more influence. WAPE = Σ|Actual − Forecast| ÷ Σ Actual.
WAPE is the better accuracy metric when actual values vary widely. In enterprise SaaS sales forecasting, a single $2M deal can dominate a quarter while another quarter has only $500K of activity. MAPE would weight a 30% error on the small quarter equally with a 5% error on the large quarter — concealing the fact that absolute dollar accuracy is much better than the average suggests. WAPE corrects for this.
Why WAPE matters for operators
Forecast accuracy directly affects how much buffer the business needs to maintain. A company with 8% WAPE can plan with tight cash buffers; a company with 25% WAPE needs to hold larger reserves and delay investment decisions until the period is more resolved. WAPE is the most decision-useful accuracy metric for revenue forecasts because it expresses error in proportional dollar terms.
WAPE also pairs naturally with MAPE as a diagnostic. When MAPE and WAPE diverge significantly (e.g., MAPE = 22%, WAPE = 9%), the small-cohort periods are driving the average — usually a sign that quiet quarters or small segments have noisy forecasts but the dollar-weighted accuracy is fine. When WAPE is higher than MAPE, the large periods are the inaccurate ones, which is the more concerning pattern.
The trap operators fall into is reporting only one accuracy metric. WAPE alone hides accuracy problems on small forecast units (territories, segments, products); MAPE alone over-weights small-unit noise. Best practice is to track both and explain divergence.
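The divergence check described above can be sketched in a few lines of Python. The function name and the 1.5× divergence threshold are illustrative assumptions, not a standard:

```python
# Illustrative sketch -- the helper name and the 1.5x threshold are assumptions.
def accuracy_diagnostic(actuals, forecasts, divergence_ratio=1.5):
    """Compute MAPE and WAPE (as percentages) and describe their divergence."""
    errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
    mape = 100 * sum(e / a for e, a in zip(errors, actuals)) / len(actuals)
    wape = 100 * sum(errors) / sum(actuals)
    if mape > divergence_ratio * wape:
        note = "small periods are noisy; dollar-weighted accuracy is fine"
    elif wape > divergence_ratio * mape:
        note = "large periods carry the error -- the more concerning pattern"
    else:
        note = "metrics agree"
    return round(mape, 1), round(wape, 1), note

# One large quarter forecast well, one small quarter forecast badly:
print(accuracy_diagnostic([10.0, 1.0], [10.5, 1.3]))
```

Here a 5% miss on the $10M quarter and a 30% miss on the $1M quarter give MAPE = 17.5% but WAPE = 7.3% — the small-period-noise pattern described above.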
WAPE formula
WAPE = (Σ |Actual − Forecast| ÷ Σ Actual) × 100
Comparison with MAPE:
MAPE = (1/n) × Σ (|Actual − Forecast| / Actual) × 100
Each period's percentage error is computed first, then averaged.
WAPE pools all errors against pooled actuals. The two metrics diverge when actual values vary widely period-to-period.
Example — quarterly SaaS forecast (4 quarters):
Q1: Forecast $1.2M, Actual $1.0M, |Error| = $200K (20%)
Q2: Forecast $0.5M, Actual $0.6M, |Error| = $100K (16.7%)
Q3: Forecast $2.4M, Actual $2.5M, |Error| = $100K (4%)
Q4: Forecast $4.0M, Actual $3.8M, |Error| = $200K (5.3%)
MAPE = mean(20%, 16.7%, 4%, 5.3%) = 11.5%
WAPE = ($200K + $100K + $100K + $200K) / ($1.0M + $0.6M + $2.5M + $3.8M)
= $600K / $7.9M = 7.6%
WAPE (7.6%) is materially lower than MAPE (11.5%) because the high-percentage errors occurred on the smallest quarters.
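The arithmetic above can be checked with a short Python snippet (figures in $M):

```python
# Reproduce the four-quarter example above (figures in $M).
actuals   = [1.0, 0.6, 2.5, 3.8]
forecasts = [1.2, 0.5, 2.4, 4.0]

errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
mape = 100 * sum(e / a for e, a in zip(errors, actuals)) / len(actuals)
wape = 100 * sum(errors) / sum(actuals)
print(f"MAPE = {mape:.1f}%, WAPE = {wape:.1f}%")  # MAPE = 11.5%, WAPE = 7.6%
```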
Dollar-weighted, the forecast is more accurate than MAPE suggests.
WAPE benchmarks by forecast type
| Forecast type | Healthy WAPE | Excellent WAPE | Best with MAPE divergence | Action if elevated |
|---|---|---|---|---|
| B2B SaaS quarterly revenue | 10–18% | <10% | Compare quarterly + monthly views | Tighten qualification at stage transitions |
| Enterprise sales (large deals) | 12–20% | <12% | WAPE preferred; large deal variance | Decompose by deal-size band |
| Demand / inventory forecasting | 8–15% | <8% | WAPE preferred for SKU mix | Improve seasonal model |
| Marketing-attributed revenue | 15–25% | <15% | Compare touchpoint models | Layer in MMM + holdouts |
| Cash-collection / DSO forecast | 5–12% | <5% | WAPE preferred for big invoices | Tighten collections SLA |
Sources: Pavilion RevOps Benchmark Survey 2024; Mosaic FP&A benchmarks 2025; Gartner Forecasting Accuracy Survey 2024; Fairview customer data.
Common mistakes when using WAPE
1. Reporting MAPE when WAPE is the better metric. When actual values vary widely (enterprise SaaS, lumpy revenue, large vs. small territories), MAPE over-weights small-cohort noise. WAPE is the better metric for proportional dollar accuracy. Default to WAPE when actuals vary by more than 5× across periods or units.
2. Using WAPE without tracking MAPE alongside. WAPE alone hides systematic inaccuracy on small forecast units. A team with 8% WAPE and 30% MAPE has a fine aggregate forecast but a structural problem with small-territory or small-segment forecasting. Track both.
3. Comparing WAPE across companies with different revenue mix. A SaaS company with one $2M deal per quarter and a SaaS company with twenty $100K deals per quarter will have different inherent WAPE floors. Compare WAPE against your own historical baseline, not against another company's published number, unless their motion mirrors yours.
4. Treating WAPE as a substitute for forecast bias tracking. WAPE is a magnitude metric — it treats over- and under-forecasts equally. A team with 8% WAPE and 7% positive bias is consistently over-forecasting and the WAPE number doesn't expose it. Always track WAPE alongside bias for full diagnostic.
5. Computing WAPE over too short a window. Forecast accuracy needs at least 4–6 periods to stabilize. WAPE on 2 quarters is dominated by single-quarter variance and is rarely diagnostic. Use trailing 6–8 quarters and refresh quarterly.
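Mistake 4 above is worth making concrete. A minimal sketch of tracking magnitude and direction together — the function name and the sign convention (positive bias = over-forecast) are assumptions for illustration:

```python
# Sketch: error magnitude (WAPE) and error direction (bias) side by side.
# Convention assumed here: positive bias means over-forecasting.
def wape_and_bias(actuals, forecasts):
    total_actual = sum(actuals)
    wape = 100 * sum(abs(a - f) for a, f in zip(actuals, forecasts)) / total_actual
    bias = 100 * sum(f - a for a, f in zip(actuals, forecasts)) / total_actual
    return round(wape, 1), round(bias, 1)

# Every quarter over-forecast by the same 10% margin: WAPE equals bias,
# so a "small" 10% error is actually fully systematic.
print(wape_and_bias([1.0, 2.0, 4.0], [1.1, 2.2, 4.4]))
```

When WAPE and bias are equal, every period missed in the same direction; when bias is near zero but WAPE is not, the misses cancel in aggregate but each period is still wrong.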
How Fairview tracks WAPE alongside other accuracy metrics
Fairview's Forecast Confidence Engine computes WAPE, MAPE, and forecast bias together — segmented by team, segment, and forecast type — so accuracy diagnostics surface the specific dimension where forecast quality is weakest.
The Next-Best Action Engine flags asymmetric accuracy: "Q3 forecast WAPE was 7.4%, MAPE was 19.2%. Divergence is concentrated in the SMB segment, where small-deal forecasts are noisy. Recommend dropping per-deal SMB forecasting and using volume-based forecasting (deal count × ACV bands) instead."
WAPE vs MAPE vs forecast bias
MAPE measures average accuracy per period. WAPE measures dollar-weighted accuracy. Forecast bias measures direction of error. The three together describe forecast quality; any one alone has a blind spot.
| | WAPE | MAPE | Forecast bias |
|---|---|---|---|
| Measures | Pooled error magnitude | Average percentage error | Directional error |
| Best when | Actuals vary widely | Actuals are similar in size | Detecting systematic over/under-forecasting |
| Risk | Hides small-unit problems | Over-weights small-unit noise | Doesn't measure magnitude |
| Use together | WAPE + MAPE diagnose where error is | MAPE + bias diagnose distribution | Bias + WAPE = full accuracy picture |
Frequently asked questions
What is WAPE in simple terms?
WAPE (Weighted Absolute Percentage Error) measures forecast accuracy by dividing the total absolute error across all periods by the total actual value across all periods. Unlike MAPE, which averages per-period percentages, WAPE weights each period by its size — making it the better metric when actual values vary widely period to period.
How is WAPE different from MAPE?
MAPE computes percentage error for each period separately and then averages them, treating each period as equally important. WAPE pools the errors and the actuals, dividing total error by total actual — giving large periods proportionally more influence. When actuals vary by 5× or more across periods, WAPE is the more accurate aggregate measure.
When should you use WAPE instead of MAPE?
Use WAPE when actual values vary widely across the forecasting cohort — typical for enterprise SaaS sales (large deals dominate quarters), demand forecasting (seasonal SKU mix), or large-account-concentrated revenue. Use MAPE when actuals are roughly comparable in size and you want to know average per-period accuracy. Best practice: track both and report when they diverge significantly.
What's a good WAPE for sales forecasting?
For B2B SaaS quarterly revenue forecasts, WAPE under 10% is excellent, 10–18% is acceptable, above 20% means the forecast model needs recalibration. Enterprise motions with lumpy revenue tend to run higher (12–20%) due to deal-timing variance. Compare against your own trailing baseline, not absolute targets.
Should you track WAPE alongside forecast bias?
Yes, always. WAPE measures error magnitude; forecast bias measures error direction. A team can have low WAPE (small errors) but high bias (errors all in the same direction) — meaning the forecast is consistently over- or under-shooting in a way that systematically misleads operating decisions. The two metrics together describe forecast quality completely.
Sources
- Pavilion RevOps Benchmark Survey 2024
- Mosaic FP&A Benchmarks 2025
- Gartner Forecasting Accuracy Survey 2024
- ICONIQ Growth Topline Report 2025
- Fairview customer data (B2B SaaS, 2025)
Fairview is an operating intelligence platform that tracks WAPE, MAPE, and forecast bias together — so accuracy diagnostics show not just how big the error is but where in the forecast it is concentrated. Start your free trial →
Siddharth Gangal is the founder of Fairview. He built the multi-metric forecast accuracy layer after watching three CFOs report 12% MAPE proudly while their WAPE — the dollar-weighted truth — sat at 22% because all the accuracy was in the small territories that didn't move the plan.
See it in Fairview
Track WAPE (Weighted Absolute Percentage Error) automatically.
14-day free trial. No credit card. First data source connected in 5 minutes.