TL;DR
MAPE (Mean Absolute Percentage Error) measures the average percentage distance between forecasted and actual values — it's the standard accuracy metric for sales, demand, and revenue forecasts. For B2B SaaS quarterly sales forecasts, MAPE below 8% is excellent, 8–15% is acceptable, and above 20% means the forecast model needs recalibration.
What is MAPE?
Mean Absolute Percentage Error (MAPE, also called mean absolute percentage deviation or MAPD) is the average of the absolute percentage differences between forecasted and actual values over a series of periods. It is the most commonly used metric for measuring forecast accuracy because it is scale-independent — a 12% MAPE means the forecast was off by 12% regardless of whether the absolute values are $100K or $10M.
MAPE = (1/n) × Σ |((Actual − Forecast) / Actual)| × 100. The absolute value means positive and negative errors are both counted as errors — a 10% overforecast and a 10% underforecast are equally bad. This makes MAPE a pure accuracy metric, not a bias metric. (See forecast bias for the directional companion metric.)
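The formula translates directly into a few lines of code. A minimal sketch (plain Python, no libraries assumed), using the six quarters from the worked example in this article:

```python
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, in percent.

    Undefined when any actual is zero (division by zero) — see the
    caveats on WMAPE for that case.
    """
    if len(actuals) != len(forecasts):
        raise ValueError("series must be the same length")
    errors = [abs((a - f) / a) for a, f in zip(actuals, forecasts)]
    return 100 * sum(errors) / len(errors)

# Six quarters of forecast vs actual revenue ($M)
forecast = [1.80, 2.10, 1.90, 2.40, 2.20, 2.55]
actual   = [1.72, 2.24, 1.71, 2.33, 1.93, 2.61]
print(round(mape(actual, forecast), 2))  # → 6.88
```

Note the argument order: the denominator is always the actual, never the forecast.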
MAPE is widely used across revenue forecasting (did the CRO predict closed-won within X%?), demand planning (did the inventory model predict sell-through within X%?), and financial planning (did the budget model predict actual costs within X%?). For operators running weekly revenue reviews, MAPE is the single number that answers: "how accurate are our forecasts?"
Why MAPE matters for operators
Forecast MAPE directly affects how much buffer an operator needs to maintain. A company with 5% MAPE can plan with tight buffers — the forecast is reliable. A company with 25% MAPE needs to maintain larger cash reserves, pad hiring timelines, and hold back investment decisions until the quarter is further along — because the forecast is unreliable.
MAPE also surfaces forecast model quality. If MAPE is rising quarter over quarter, something in the forecasting process is deteriorating — either the input data (pipeline data quality is declining) or the model (the forecast methodology doesn't account for a changing sales environment). Monitoring MAPE as a KPI for the RevOps and FP&A functions creates accountability for forecast quality, not just forecast inputs.
For board reporting, MAPE is often cited as a risk metric. Investors and board members use historical MAPE to discount management's forward guidance — a team with 5% MAPE gets closer-to-face-value treatment of their projections than a team with 30% MAPE that habitually needs to revise mid-quarter.
MAPE formula
MAPE = (1/n) × Σ |(Actual − Forecast) / Actual| × 100

Where:
- n = number of periods (months, quarters)
- |·| = absolute value (positive and negative errors treated equally)

Example (6 quarters):
- Q1: Forecast $1.80M, Actual $1.72M → |($1.72 − $1.80) / $1.72| = 4.65%
- Q2: Forecast $2.10M, Actual $2.24M → |($2.24 − $2.10) / $2.24| = 6.25%
- Q3: Forecast $1.90M, Actual $1.71M → |($1.71 − $1.90) / $1.71| = 11.1%
- Q4: Forecast $2.40M, Actual $2.33M → |($2.33 − $2.40) / $2.33| = 3.00%
- Q5: Forecast $2.20M, Actual $1.93M → |($1.93 − $2.20) / $1.93| = 14.0%
- Q6: Forecast $2.55M, Actual $2.61M → |($2.61 − $2.55) / $2.61| = 2.30%

MAPE = (4.65 + 6.25 + 11.1 + 3.00 + 14.0 + 2.30) / 6 = 6.88%

Interpretation: The forecast was off by an average of 6.88% per quarter — strong.

MAPE caveats:
- MAPE is undefined when Actual = 0 (division by zero) — use WMAPE (weighted MAPE) if any actual periods are zero
- MAPE is asymmetric — because the denominator is the actual, an overforecast can produce an error above 100% while an underforecast error is capped at 100%, so MAPE penalizes overforecasting more heavily
- For very small values, MAPE can be misleading — a $5K forecast error on a $40K quarter is 12.5% MAPE, identical to a $500K error on a $4M quarter
- Always pair MAPE with forecast bias (MFE) to get both magnitude and direction of error
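The scale caveat is easiest to see numerically. A minimal sketch (plain Python, illustrative numbers) comparing simple MAPE against WMAPE on a small quarter next to a large one:

```python
def mape(actuals, forecasts):
    """Simple MAPE: every period's percentage error gets equal weight."""
    errors = [abs((a - f) / a) for a, f in zip(actuals, forecasts)]
    return 100 * sum(errors) / len(errors)

def wmape(actuals, forecasts):
    """Weighted MAPE: total absolute error over total actuals.

    Stays defined when individual periods are zero, and weights each
    period's error by its revenue scale.
    """
    total_error = sum(abs(a - f) for a, f in zip(actuals, forecasts))
    return 100 * total_error / sum(actuals)

# Hypothetical quarters in $K: a $5K miss on a $40K quarter next to a
# $50K miss on a $4M quarter.
actual   = [40, 4000]
forecast = [45, 4050]

print(f"MAPE  {mape(actual, forecast):.2f}%")   # the $40K quarter dominates
print(f"WMAPE {wmape(actual, forecast):.2f}%")  # errors weighted by scale
```

Here simple MAPE averages a 12.5% error and a 1.25% error to 6.875%, while WMAPE (55/4040 ≈ 1.36%) reflects that almost all the revenue was forecast accurately.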
MAPE benchmarks by forecast type
| Forecast type | Excellent MAPE | Good MAPE | Needs work | Common cause of high MAPE |
|---|---|---|---|---|
| B2B SaaS — quarterly sales | <8% | 8–15% | >20% | Pipeline data quality, deal-stage accuracy |
| B2B SaaS — monthly revenue | <5% | 5–12% | >15% | Timing of large enterprise deals |
| D2C — weekly demand | <10% | 10–20% | >25% | Promotional spikes, stockouts |
| FP&A — annual budget | <5% | 5–10% | >15% | Planning assumptions not updated mid-year |
| SaaS — ARR expansion | <10% | 10–18% | >22% | Expansion model doesn't capture churn timing |
Sources: Pavilion RevOps Benchmark Survey 2024; SaaStr 2025 Sales Forecasting Survey; Mosaic FP&A Benchmarks 2025; Fairview customer data.
Common mistakes when using MAPE
1. Using MAPE as the only forecast quality metric. MAPE measures accuracy; it doesn't reveal direction. A 10% MAPE with +10% consistent bias is different from a 10% MAPE with zero bias — the first requires systematic correction, the second requires model improvement. Always track MAPE alongside forecast bias (MFE).
2. Averaging MAPE across periods with very different scales. If Q1 revenue was $400K and Q4 was $1.8M, a 10% error in Q1 ($40K) gets equal weight to a 10% error in Q4 ($180K) in simple MAPE. Use WMAPE (weighted by revenue) when periods have significantly different scales.
3. Not measuring MAPE at the component level. Company-wide MAPE can look acceptable while new-business MAPE is 30% and expansion MAPE is 5%. The two error sources require different fixes. Decompose MAPE by revenue type, team, and region.
4. Setting MAPE targets without reference to what's achievable in your sales motion. A 5% quarterly MAPE target is realistic for a PLG company where product data drives revenue prediction. It's unrealistic for an enterprise company with 90-day cycles, 10-deal quarters, and large individual deal variance. Set MAPE targets relative to forecast type and deal volume.
5. Not acting on MAPE trends before they become crises. A MAPE that climbs from 9% to 14% to 21% over three quarters is a signal that the forecast model or data inputs are degrading. By the time MAPE is 21%, the operating plan built on those forecasts is materially off. Track MAPE as a leading indicator, not a lagging audit.
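The first and third mistakes can be sketched together: tracking MAPE alongside bias (MFE) and decomposing both by revenue type. The numbers and segment names below are hypothetical, and bias is defined here as positive when the forecast runs high:

```python
from collections import defaultdict

def mape(pairs):
    """pairs: (actual, forecast) tuples."""
    return 100 * sum(abs((a - f) / a) for a, f in pairs) / len(pairs)

def bias(pairs):
    """Mean forecast error as a percentage; positive = forecast ran high."""
    return 100 * sum((f - a) / a for a, f in pairs) / len(pairs)

# Hypothetical quarterly (actual, forecast) pairs tagged by revenue type ($M)
history = [
    ("new_business", 1.10, 1.40),
    ("new_business", 0.95, 1.25),
    ("expansion",    0.80, 0.83),
    ("expansion",    0.90, 0.86),
]

by_segment = defaultdict(list)
for segment, actual, forecast in history:
    by_segment[segment].append((actual, forecast))

for segment, pairs in sorted(by_segment.items()):
    print(f"{segment}: MAPE {mape(pairs):.1f}%, bias {bias(pairs):+.1f}%")
```

In this made-up data, new-business MAPE is ~29% with a consistent positive bias (fix: systematic haircut on committed pipeline), while expansion MAPE is ~4% with near-zero bias — two different problems a blended company-wide MAPE would hide.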
How Fairview tracks MAPE automatically
Fairview's Forecast Confidence Engine calculates MAPE automatically on a rolling 6–8 quarter basis by comparing CRM pipeline and stage-weighted forecasts to actual closed-won revenue — segmented by team, rep, and revenue type.
The Next-Best Action Engine flags MAPE deterioration: "Quarterly sales forecast MAPE increased from 11% (trailing 4Q average) to 18% this quarter through Week 8. The primary driver is three enterprise deals that moved from Commit to Late Stage without closing as projected. Review qualification status and push-pull timing for these accounts before finalising Q3 board guidance."
Companies using Fairview typically reduce forecast MAPE by 5–10 percentage points within two quarters by identifying the specific deal stages and rep cohorts where forecast accuracy is lowest — and applying targeted qualification processes to those areas.
Frequently asked questions
What is MAPE in simple terms?
The average percentage by which your forecast was wrong — in either direction — over a series of periods. If your quarterly revenue forecasts were off by 4%, 11%, 8%, and 7% over four quarters, your MAPE is 7.5%. Lower is better. It's the standard way to answer: 'how accurate are our forecasts?'
What is a good MAPE for sales forecasting?
For B2B SaaS quarterly forecasting: below 8% is excellent, 8–15% is acceptable, above 20% means the forecast model needs recalibration. For monthly revenue: below 5% is strong. For D2C weekly demand: below 10% is healthy. The right benchmark depends on deal volume — a company closing 5 enterprise deals per quarter will have higher MAPE than one closing 200 SMB deals, purely due to the statistical variance of small samples.
What is the difference between MAPE and forecast bias?
MAPE measures the average magnitude of forecast errors (how far off, in either direction). Forecast bias (MFE — Mean Forecast Error) measures the directional component (consistently too high or too low). A forecast can have low MAPE (accurate) but high bias (systematically off in one direction). Track both: MAPE for accuracy, MFE for direction.
When should you use WMAPE instead of MAPE?
Use WMAPE (Weighted MAPE) when your periods have significantly different revenue scales — for example, if Q4 is 3× larger than Q1. WMAPE = Σ|Actual − Forecast| / ΣActual × 100. It weights errors by scale, preventing small-period errors from inflating the average. Simple MAPE weights all periods equally, which overstates the impact of small-period errors.
How do you improve MAPE?
Three levers: improve input data quality (pipeline data accuracy, stage definitions, deal probability scores), improve the forecast model (move from subjective rep-committed to weighted or AI-assisted models), and increase deal volume in each period (larger samples reduce variance in percentage terms). The fastest MAPE improvement usually comes from enforcing consistent stage definitions — this alone reduces variance by 5–10 percentage points in most mid-market teams.
Sources
- Pavilion RevOps Benchmark Survey 2024
- SaaStr 2025 SaaS Benchmark Report
- Mosaic FP&A Benchmarks 2025
- OpenView SaaS Benchmarks 2025
- Fairview customer data (B2B SaaS, 2025)
Fairview is an operating intelligence platform that tracks forecast MAPE automatically — comparing CRM projections to closed-won actuals quarter over quarter, segmented by team and revenue type. Start your free trial →
Siddharth Gangal is the founder of Fairview. He built the Forecast Confidence Engine after watching operators present board guidance that had been running 22% MAPE for six quarters — a pattern that was obvious in the historical data but invisible to anyone not tracking it systematically.
See it in Fairview
Track MAPE (Mean Absolute Percentage Error) automatically.
14-day free trial. No credit card. First data source connected in 5 minutes.