Revenue · Cluster 3 Spoke

Forecast Accuracy: How to Measure and Improve It

The forecast accuracy metrics operators use (MAPE, WAPE, bias), the benchmark ranges, and the seven levers that move the number from ±15% to ±5%.

By Siddharth Gangal · Founder, Fairview · Updated April 13, 2026 · 12 min read

Forecast accuracy target with three arrow clusters showing tight accuracy, high variance, and systematic bias

TL;DR

  • Forecast accuracy = 1 − |Forecast − Actual| ÷ Forecast, at segment level.
  • Use MAPE for equal-weighted accuracy, WAPE when deal sizes vary widely.
  • Target: ±5% at segment level for mature B2B SaaS; ±10% at scale stage.
  • Track bias (signed variance) alongside accuracy. A team that always beats forecast is biased low, not disciplined.
  • The seven levers below move most teams from ±15% to ±5% within two quarters.

Forecast accuracy is the single most load-bearing number a RevOps team publishes. Hiring plans, cash runway, and board trust all rest on it. And most growth-stage B2B companies quietly operate with ±15% accuracy while reporting ±5% — because they average the hits and misses and call it good.

This post covers the forecast accuracy metrics that actually diagnose what is broken (MAPE, WAPE, bias), the benchmark ranges by stage, and the seven levers that move the number week over week. It is a companion to the RevOps pillar, the 12 RevOps KPIs, and the weekly revenue review.

What is forecast accuracy?

Definition

Forecast accuracy: how closely a predicted revenue, pipeline, or sales number matches the actual outcome, expressed as a percentage. A forecast of $2M against an actual of $1.9M is 95% accurate (5% variance). Tracked at segment level, not blended, because blended accuracy hides offsetting errors.

The metric matters because everything downstream depends on it. Hiring plans assume the revenue. Ad spend assumes the revenue. Runway math assumes the revenue. When the forecast is ±15%, "plan for Q3" is a gamble.

The three core formulas

Forecast accuracy formula cards for single-period accuracy, MAPE mean absolute percentage error, and WAPE weighted absolute percentage error
Single-period accuracy, MAPE, and WAPE. Each answers a different question about the same forecast.

1. Single-period accuracy

Accuracy = 1 − |Forecast − Actual| ÷ Forecast

The simplest form. A $2.0M forecast against a $1.9M actual gives |0.1| ÷ 2.0 = 5% error, so 95% accuracy. Use this for one segment or one quarter.
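The single-period formula is a one-liner. A minimal Python sketch using the $2.0M forecast / $1.9M actual example above:

```python
def forecast_accuracy(forecast: float, actual: float) -> float:
    """Single-period accuracy: 1 - |forecast - actual| / forecast."""
    return 1 - abs(forecast - actual) / forecast

# $2.0M forecast vs $1.9M actual -> 5% error, 95% accuracy
print(f"{forecast_accuracy(2_000_000, 1_900_000):.0%}")  # 95%
```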

2. MAPE (mean absolute percentage error)

MAPE = Σᵢ ( |Forecastᵢ − Actualᵢ| ÷ Forecastᵢ ) ÷ n

Average the absolute percentage error across n segments (or n periods). Each segment contributes equally regardless of dollar size. Good for benchmarking teams of similar scale; bad when one segment dwarfs the others.

3. WAPE (weighted absolute percentage error)

WAPE = Σᵢ |Forecastᵢ − Actualᵢ| ÷ Σᵢ Actualᵢ

Sum the absolute errors, divide by the sum of actuals. Naturally weights large segments more. For B2B SaaS with uneven deal sizes or enterprise vs SMB splits, WAPE is usually the more honest number.

Key insight

MAPE treats a 10% miss on a $100K segment the same as a 10% miss on a $3M segment. WAPE does not. Pick WAPE when dollars matter more than team fairness.
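The divergence is easy to see in code. A short sketch with two hypothetical segments, a $100K segment missing by 10% and a $3M segment missing by 3%:

```python
def mape(pairs):
    """Mean absolute percentage error: every segment weighted equally."""
    return sum(abs(f - a) / f for f, a in pairs) / len(pairs)

def wape(pairs):
    """Weighted APE: total absolute error over total actuals."""
    return sum(abs(f - a) for f, a in pairs) / sum(a for _, a in pairs)

# (forecast, actual) per segment
segments = [(100_000, 90_000), (3_000_000, 2_910_000)]
print(f"MAPE: {mape(segments):.1%}")  # 6.5% -- small segment drags it up
print(f"WAPE: {wape(segments):.1%}")  # 3.3% -- dollars dominate
```

Same two segments, two very different headlines; WAPE is the one to publish when the $3M segment pays the bills.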

Accuracy is not enough: measure bias

Absolute accuracy hides direction. A team that misses forecast by −12% one quarter and +12% the next has a 0% average signed variance but a 12% average absolute error. The blended number sounds great. It is not; it means the forecasting process is noisy and unpredictable.

Bias is the signed version: the systematic tendency to over- or under-forecast. Four quarters of +7%, +5%, +8%, and +6% variance says the team consistently beats forecast — which sounds good but is actually sandbagging worth fixing.

Bias = Σᵢ (Actualᵢ − Forecastᵢ) ÷ Σᵢ Forecastᵢ

Positive bias = systematically under-forecasting (sandbagging). Negative bias = systematically over-forecasting (happy ears). Either pattern is a process problem worth fixing even when absolute accuracy looks fine.
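The bias formula applied to the four-quarter example above. The dollar figures are illustrative, assuming a flat $1M forecast each quarter:

```python
def forecast_bias(pairs):
    """Signed bias: positive means actuals beat forecast (sandbagging)."""
    return sum(a - f for f, a in pairs) / sum(f for f, _ in pairs)

# (forecast, actual) for four quarters of +7%, +5%, +8%, +6% variance
quarters = [(1_000_000, 1_070_000), (1_000_000, 1_050_000),
            (1_000_000, 1_080_000), (1_000_000, 1_060_000)]
print(f"{forecast_bias(quarters):+.1%}")  # +6.5% -> consistent sandbagging
```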

Benchmarks by segment and stage

Stage / segment          Healthy accuracy   Best-in-class   Intervene above
Pre-scale / < $5M ARR    ±15%               ±10%            ±20%
Scale / $5–$25M ARR      ±10%               ±5%             ±15%
Growth / $25M+ ARR       ±5%                ±3%             ±10%
SMB segment              ±7%                ±5%             ±12%
Enterprise segment       ±10%               ±5%             ±15%

Enterprise forecasts tolerate a bit more variance because deal size dominates count — on a $5M enterprise quarter, a single $400K opp slipping moves the number 8% by itself. SMB should be tighter because the law of large numbers works in the forecaster’s favor.

Seven levers to improve forecast accuracy

Seven levers to improve forecast accuracy: segment, stage gates, conversion weighting, bias tracking, weekly reconcile, calibration, single source
Seven levers, ranked by impact. Most teams see ±15% → ±5% within two quarters of working through them.
  1. Forecast by segment, not blended. SMB, Mid-Market, Enterprise forecast separately, then roll up. Blended forecasts hide offsetting errors.
  2. Tighten stage gate definitions. If "Stage 3" means different things to different reps, conversion assumptions are meaningless. Write the definitions down. Enforce them.
  3. Use conversion-based weighting. Replace gut-feel "most likely" fields with historical conversion rates by stage. A Stage 3 opp at 40% historical conversion contributes 40%, not 100%, to the forecast.
  4. Track bias as seriously as accuracy. Publish both numbers every quarter. Reward calibration, not just hitting the number.
  5. Reconcile weekly, not monthly. A forecast that only updates at month-end discovers its miss at month-end. Weekly reconciliation lets you course-correct.
  6. Calibrate against 4 rolling quarters. Your conversion rates drift with market, team, and product changes. Refresh the model quarterly with the trailing 4 quarters of actuals.
  7. One source of truth. When marketing, sales, and finance pull the forecast from different tools, the first hour of every forecast meeting is spent reconciling numbers. Pick one source. Defend it.
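Lever 3 is the easiest to sketch in code. The stage conversion rates below are hypothetical placeholders, not benchmarks — in practice they come from the trailing four quarters of actuals (lever 6):

```python
# Hypothetical trailing-4-quarter conversion rates by stage (illustrative only)
STAGE_CONVERSION = {"Stage 1": 0.05, "Stage 2": 0.20, "Stage 3": 0.40, "Stage 4": 0.75}

def weighted_forecast(opps):
    """Each opp contributes amount x historical stage conversion, not 100%."""
    return sum(amount * STAGE_CONVERSION[stage] for stage, amount in opps)

pipeline = [("Stage 3", 200_000), ("Stage 4", 400_000), ("Stage 2", 150_000)]
print(weighted_forecast(pipeline))  # ~$410K weighted, vs $750K of raw pipeline
```

A rep's gut-feel "most likely" field would have counted the $200K Stage 3 deal at full value; the historical 40% conversion rate counts it at $80K.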

Quote-ready

A forecast team that is right about the total but wrong about every segment is lucky, not skilled. Segment-level variance is the real scoreboard.

How Fairview scores forecast accuracy automatically

Fairview forecast confidence dashboard showing segment-level accuracy, bias, and variance against commit and best-case
Fairview computes segment-level accuracy, bias, and confidence on every open quarter.

Fairview connects natively to HubSpot, Salesforce, Pipedrive, Stripe, QuickBooks, Xero, Google Ads, and Meta Ads. The Forecast Confidence Engine calculates segment-level MAPE, WAPE, and bias automatically using the trailing four quarters of actuals, then scores the current forecast against the historical pattern.

When a segment forecast drifts outside its historical band, Fairview writes a named next-best action: "Mid-Market forecast is 12% higher than the 4-quarter rolling bias-adjusted trend. Two Stage 3 deals above the historical conversion rate for their ACV band. Review before commit."

Pair with the Operating Dashboard and Pipeline Health Monitor for the full view. See pricing for the plan that fits your forecast rhythm.

  • MAPE · WAPE: both, at segment level
  • Bias: signed, tracked quarterly
  • 4Q rolling window: auto-calibration

Key takeaways

  • Measure accuracy at segment level — blended numbers hide offsetting misses.
  • Use MAPE when segments are similar in size; WAPE when dollars vary.
  • Track bias alongside accuracy. Sandbagging and happy ears are both process bugs.
  • Scale-stage target: ±10%. Growth-stage target: ±5%.
  • The seven levers move most teams from ±15% to ±5% in two quarters.

See segment-level forecast accuracy every week.

Connect your CRM and billing. Fairview calculates MAPE, WAPE, and bias automatically, then scores this quarter’s commit against your historical pattern. 14-day trial, no card required.

Book a demo · Start free trial

Frequently asked questions

What is forecast accuracy?

Forecast accuracy measures how closely a predicted revenue, pipeline, or sales number matches the actual outcome. It is expressed as a percentage, usually calculated as 1 minus the absolute variance divided by the forecast, and tracked at segment level rather than blended because blended numbers hide offsetting misses.

How do you calculate forecast accuracy?

Use 1 minus the absolute value of (forecast minus actual) divided by forecast, expressed as a percentage. For a set of segments, compute MAPE (mean absolute percentage error) or WAPE (weighted absolute percentage error). WAPE is usually more honest when deal sizes vary widely across segments.

What is a good forecast accuracy target?

Plus or minus 5% at the quarterly segment level is strong for mature B2B SaaS. Plus or minus 10% is acceptable for scale stage. Anything worse than plus or minus 15% means the forecast is unreliable enough that leadership should not plan hiring or spend against it. Enterprise segments tolerate slightly more variance than SMB.

What is the difference between MAPE and WAPE?

MAPE averages percentage errors equally across segments. WAPE weights the error by the actual dollar size of each segment, so a large-segment miss hurts more than a small-segment one. For B2B SaaS with uneven deal sizes, WAPE is usually the more honest number to publish.

What is forecast bias?

Forecast bias is the systematic tendency of a forecast to consistently come in over or under actuals. A team that beats the forecast four quarters running is biased low (sandbagging), not disciplined. Bias is measured by the signed sum of variances and is often more diagnostic than absolute accuracy.

How do you improve forecast accuracy?

Forecast by segment rather than blended, tighten stage-gate definitions, use conversion-based weighting instead of gut-feel fields, track bias alongside accuracy, reconcile the forecast weekly, calibrate against four rolling quarters of actuals, and commit to one source of truth. Most teams move from plus or minus 15 percent to plus or minus 5 percent within two quarters of applying these levers.

Tags

forecast accuracy · MAPE · WAPE · revenue forecasting · RevOps

Ready to see your data clearly?

Stop reporting on last week.
Start acting on this week.

10 minutes to connect. No SQL. No engineering team. Your first dashboard is built automatically.

No credit card required · Cancel anytime · Setup in under 10 minutes