Forecast Accuracy

2026-04-12 · 9 min read · Sales Forecasting
Forecast Accuracy measures how close a revenue forecast was to actual revenue in a given period. Expressed as a percentage, it quantifies the reliability of your forecasting process. High forecast accuracy means leadership can trust the number for hiring, budgeting, and capacity decisions. Low accuracy means the business is planning on fiction.
TL;DR: Forecast accuracy is 1 minus the absolute forecast error, expressed as a percentage. The median B2B SaaS company achieves 72–78% forecast accuracy at the start of a quarter, improving to 85–92% by the final two weeks (Clari Revenue Insights, 2025).

What is forecast accuracy?

Forecast accuracy (also called forecast precision or revenue forecast accuracy) is the degree to which a predicted revenue figure matches the actual revenue collected in the same period. Sales forecasting teams use it as the primary measure of forecasting process quality — not whether the number was high or low, but whether it was right.

Every B2B company forecasts revenue. The question is whether that forecast is useful. A forecast that consistently misses by 25–30% is worse than no forecast at all, because it gives leadership false confidence. Hiring plans, marketing budgets, and capacity decisions built on an inaccurate forecast cascade into real financial damage — overhiring ahead of revenue that never arrives, or under-investing when growth was actually tracking.

For mid-market B2B SaaS companies ($2M–$30M ARR), healthy forecast accuracy is 80–90% when measured at the quarterly level. Below 70%, the forecast is not guiding decisions — it is a guess dressed up in a spreadsheet. Above 90% consistently, the company likely has a mature pipeline coverage model and clean CRM data.

Forecast accuracy differs from forecast confidence in what it measures. Accuracy is backward-looking: how close was last quarter's forecast to reality? Confidence is forward-looking: how reliable is the current forecast based on pipeline composition and data quality?

Why forecast accuracy matters for operators

When the board asks for next quarter's number, the operator needs to give a figure they believe. If the last three quarterly forecasts missed by 20%, 15%, and 28%, there is no reason to trust the next one. The forecast becomes a negotiation exercise instead of a planning tool.

The financial consequences are specific. A company forecasting $1.8M in quarterly revenue that actually closes $1.35M has over-committed on headcount, marketing spend, and infrastructure by $450K worth of expected income. Those commitments do not reverse easily. Two consecutive misses of this magnitude can force layoffs or an emergency fundraise.

A typical 80-person SaaS company that starts measuring forecast accuracy discovers its quarterly error rate is 18–25%. The error is not random — it clusters around specific deal types, rep performance, and pipeline stages. Once operators identify which deals are dragging down accuracy, they can either improve the inputs (better CRM hygiene, tighter stage criteria) or adjust the model (apply different win rates by stage or deal type).

Forecast accuracy formula

Forecast Accuracy = (1 - |Actual Revenue - Forecasted Revenue| / Actual Revenue) x 100

Example:
Forecasted Revenue: $1,650,000
Actual Revenue: $1,820,000
Error: |$1,820,000 - $1,650,000| = $170,000

Forecast Accuracy = (1 - $170,000 / $1,820,000) x 100 = 90.7%

What each component means:

  • Actual Revenue: The revenue actually collected in the period. Use recognized revenue, not booked pipeline. Deals that closed but have not been invoiced or collected should follow your recognition standard.
  • Forecasted Revenue: The projected number from the start of the measurement period. Use the forecast as of the period start date — not an updated mid-period number. Measuring accuracy against a revised forecast defeats the purpose.
  • Absolute Value: The formula uses absolute error, so under-forecasting and over-forecasting are treated equally. A forecast of $1.5M against $1.8M actual (under) and $2.1M against $1.8M actual (over) both produce the same 16.7% error rate.
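The formula and the worked example above can be sketched as a small Python function (the function name and figures are illustrative, not part of any product API):

```python
def forecast_accuracy(actual: float, forecast: float) -> float:
    """Forecast accuracy as a percentage: (1 - |actual - forecast| / actual) * 100."""
    if actual == 0:
        raise ValueError("actual revenue must be non-zero")
    return (1 - abs(actual - forecast) / actual) * 100

# Worked example from above: forecast $1.65M, actual $1.82M
print(round(forecast_accuracy(actual=1_820_000, forecast=1_650_000), 1))  # 90.7

# Absolute error treats under- and over-forecasting identically:
# $1.5M vs $1.8M actual and $2.1M vs $1.8M actual score the same
print(forecast_accuracy(1_800_000, 1_500_000) == forecast_accuracy(1_800_000, 2_100_000))  # True
```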

Variant: Weighted Forecast Accuracy

Some teams weight accuracy by deal segment: enterprise, mid-market, and SMB. This prevents a flood of small, predictable deals from masking poor accuracy on large, volatile enterprise deals. Calculate accuracy per segment, then weight by revenue contribution.
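A minimal sketch of the segment-weighted variant, assuming hypothetical segment figures (the weighting scheme follows the description above: accuracy per segment, weighted by each segment's share of actual revenue):

```python
def weighted_forecast_accuracy(segments: dict[str, tuple[float, float]]) -> float:
    """Revenue-weighted accuracy across segments.

    segments maps segment name -> (actual_revenue, forecast_revenue).
    Each segment's accuracy is weighted by its share of total actual revenue,
    so volatile enterprise misses are not masked by predictable SMB deals.
    """
    total_actual = sum(actual for actual, _ in segments.values())
    weighted = 0.0
    for actual, forecast in segments.values():
        accuracy = (1 - abs(actual - forecast) / actual) * 100
        weighted += accuracy * (actual / total_actual)
    return weighted

# Hypothetical quarter: accurate SMB forecasts, a volatile enterprise miss
segments = {
    "enterprise": (900_000, 600_000),   # 66.7% accurate
    "mid_market": (500_000, 480_000),   # 96.0% accurate
    "smb":        (400_000, 410_000),   # 97.5% accurate
}
print(round(weighted_forecast_accuracy(segments), 1))  # 81.7
```

Note that the enterprise segment drags the weighted figure well below the SMB and mid-market scores, which is exactly the signal a blended number would hide.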

Forecast accuracy benchmarks by company type

How forecast accuracy varies across B2B company segments. Measured at the quarterly level, forecast set at start of quarter.

| Segment | Good | Average | Below Average | Action Needed |
|---|---|---|---|---|
| Early-stage SaaS (<$1M ARR) | 70–80% | 55–69% | Below 55% | Small deal volume makes accuracy volatile; focus on pipeline coverage instead |
| Growth SaaS ($1–10M ARR) | 80–88% | 70–79% | Below 70% | Audit win rates by stage; remove stale deals from forecast; enforce CRM hygiene |
| Scale SaaS ($10M+ ARR) | 85–93% | 75–84% | Below 75% | Segment forecasts by deal type; apply different close rates per segment; use confidence-weighted models |
| B2B Services / Agencies | 75–85% | 60–74% | Below 60% | Project-based revenue is inherently harder to forecast; track backlog separately from new business pipeline |

Sources: Clari Revenue Insights 2025, InsightSquared Forecast Benchmark Report 2025, Pavilion COO Survey 2025, industry-observed ranges based on operator reports.

Common mistakes when measuring forecast accuracy

1. Measuring accuracy against a revised mid-quarter forecast

If you update the forecast at week 6 and then measure accuracy against that revised number, you are testing your ability to read the last 4 weeks — not your ability to predict a quarter. Always measure against the original forecast set at the start of the period. Revisions are useful for planning, but accuracy must be scored against the initial call.

2. Using pipeline-weighted forecast as the baseline

A pipeline-weighted forecast applies historical win rates to current pipeline value. It is a useful model, but it is not the same as a committed forecast. Measuring accuracy against a mechanistic pipeline calculation tells you whether your win rates held — not whether your forecast was right. Track both, but report them separately.

3. Treating 100% accuracy as the target

A forecast that is exactly right every quarter may mean the team is sandbagging — deliberately forecasting low to guarantee a hit. Healthy forecasting has some variance. The target is a tight range (80–90%), not perfection. Consistent 95%+ accuracy at a growth-stage company deserves scrutiny, not celebration.

4. Not segmenting accuracy by deal type or rep

An overall accuracy of 82% can hide a pattern: SMB deals forecast at 95% accuracy while enterprise deals forecast at 60%. Aggregate accuracy masks the segment where the forecasting process is broken. Segment by deal size, rep, and source channel to find the real problem.

5. Ignoring direction of error

A team that consistently under-forecasts by 15% has a different problem than one that swings between +20% and -20%. Consistent directional bias means the model or assumptions are systematically off. Variable error means the inputs (pipeline data, stage definitions) are unreliable. The fix is different for each.
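One way to separate these two failure modes is to track the mean signed error alongside the mean absolute error. A sketch, using hypothetical quarterly histories (the function and figures are illustrative):

```python
def error_profile(history: list[tuple[float, float]]) -> dict[str, float]:
    """Summarize a series of (actual, forecast) periods.

    A strongly negative mean signed error with a similar absolute error means
    consistent under-forecasting (a model/assumption bias). A near-zero signed
    error with a large absolute error means swings in both directions
    (unreliable inputs: pipeline data, stage definitions).
    """
    signed = [(forecast - actual) / actual * 100 for actual, forecast in history]
    return {
        "mean_signed_error_pct": sum(signed) / len(signed),
        "mean_absolute_error_pct": sum(abs(e) for e in signed) / len(signed),
    }

# Hypothetical: a team that under-forecasts every quarter by ~15%
biased = [(1_000_000, 850_000), (1_200_000, 1_020_000), (900_000, 765_000)]
# Hypothetical: a team that swings +/-20% with no consistent direction
swingy = [(1_000_000, 1_200_000), (1_200_000, 960_000),
          (900_000, 1_080_000), (1_100_000, 880_000)]

print(error_profile(biased))  # signed -15, absolute 15 -> directional bias
print(error_profile(swingy))  # signed 0, absolute 20 -> noisy inputs
```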

How Fairview tracks forecast accuracy automatically

Fairview's Forecast Confidence Engine calculates forecast accuracy automatically at the end of each period by comparing the original forecast to actual closed revenue pulled from your CRM and payment processor. No manual reconciliation between Salesforce and Stripe required.

The dashboard shows accuracy trending over time — quarterly, monthly, and by deal segment. You can see whether accuracy is improving as your process matures or degrading as pipeline volume grows. Fairview also breaks accuracy down by rep and deal type, so you can pinpoint exactly where the forecast breaks.

When the current quarter's forecast is set, Fairview assigns a forecast confidence score based on pipeline composition, historical accuracy by segment, and CRM hygiene completeness. This forward-looking score helps you gauge whether this quarter's number is more or less reliable than last quarter's.

See how Forecast Confidence Engine works

Forecast accuracy vs forecast confidence

People often confuse forecast accuracy with forecast confidence. They answer different questions.

| | Forecast Accuracy | Forecast Confidence |
|---|---|---|
| What it measures | How close past forecasts were to actual revenue | How reliable the current forecast is based on pipeline quality |
| When to use it | End-of-period review; improving the forecasting process | Mid-quarter planning; assessing risk in the current number |
| Key difference | Backward-looking: was the last forecast right? | Forward-looking: is this forecast trustworthy? |
| Who tracks it | Operators, CFOs, board-level reporting | RevOps, sales managers, weekly operating review |

Forecast accuracy tells you whether your process works. Forecast confidence tells you whether to trust the number sitting in front of you right now. Track both. Accuracy improves the model over time. Confidence guides decisions this quarter.

FAQ

What is forecast accuracy in simple terms?

Forecast accuracy measures how close your revenue prediction was to the actual revenue collected. If you forecast $1.65M and closed $1.82M, your forecast accuracy was 90.7%. It tells you whether your forecasting process is reliable enough to base hiring, budgeting, and capacity decisions on.

What is a good forecast accuracy for B2B SaaS?

For growth-stage B2B SaaS ($1–10M ARR), 80–88% quarterly forecast accuracy is considered good. Below 70% means the forecast is not reliably guiding decisions. Above 90% consistently is strong but worth validating — it may indicate sandbagging. Early-stage companies naturally have more variance due to smaller deal volumes.

How do you calculate forecast accuracy?

Divide the absolute difference between actual and forecasted revenue by actual revenue, subtract the result from 1, then multiply by 100. The formula: (1 - |Actual - Forecast| / Actual) x 100. Using the absolute value means under-forecasting and over-forecasting are treated equally. Always measure against the original forecast set at the period start, not a revised number.

What is the difference between forecast accuracy and forecast confidence?

Forecast accuracy is backward-looking — it scores how close past forecasts were to actual results. Forecast confidence is forward-looking — it assesses how reliable the current forecast is based on pipeline quality, data completeness, and historical patterns. Accuracy improves your process. Confidence guides decisions right now.

How often should you measure forecast accuracy?

Quarterly, with monthly tracking as a leading indicator. Quarterly accuracy is the standard board-level metric. Monthly tracking catches drift earlier — if accuracy drops in month 1 of the quarter, you can adjust inputs (tighten stage criteria, clean pipeline) before the quarter ends. Weekly measurement is too granular for accuracy but appropriate for confidence.

What causes poor forecast accuracy?

The most common causes: stale pipeline data (deals with outdated close dates and stages), inconsistent win rate assumptions across deal types, reps committing deals that are not truly in late-stage pipeline, and not adjusting for seasonal patterns. Improving CRM hygiene typically produces the fastest accuracy gains because it fixes the data the model reads.

Related terms

  • Forecast Confidence — a forward-looking score that quantifies how reliable the current forecast is, based on pipeline composition and data quality.
  • Sales Forecast — a projected revenue estimate for a future period, derived from pipeline data, historical close rates, and rep inputs.
  • Pipeline Coverage Ratio — the ratio of total weighted pipeline to quota or revenue target, expressed as a multiple.
  • Bottom-Up Forecast — a forecasting method that builds the revenue estimate from individual deal-level data rather than top-down assumptions.
  • Win Rate — the percentage of qualified opportunities that reach closed-won, used as a key input to most forecasting models.

Fairview is an Operating Intelligence Platform that tracks forecast accuracy automatically alongside forecast confidence, pipeline coverage, and sales velocity. Start your free trial →

Siddharth Gangal is Founder at Fairview. He has spent the past decade building revenue operations systems for B2B SaaS companies from seed stage through Series C.
