
When Mark Roberge joined HubSpot as its fourth employee and founding CRO, he made a choice that most sales leaders resist: he refused to run the quarterly forecast on rep judgment.
Instead of asking his team what they thought would close, Roberge built quantifiable models around behavioral data like stage conversion rates, time-in-stage benchmarks, and engagement signals. When a rep's gut-feel commit diverged from what the model predicted, the model was usually right. The results were hard to argue with: revenue grew 6,000% during his nine-year tenure, the customer base scaled from 1 to 12,000, and HubSpot ranked #33 on the Inc. 500 fastest-growing companies list.
Roberge has since documented the same pattern across hundreds of startups at Stage 2 Capital: the single most common reason revenue acceleration fails is that leaders mistake intuition for signal. They forecast from what reps say, not from what's actually happening in the pipeline. Gartner estimates that even by 2025, 90% of B2B enterprise sales organizations will continue to rely on intuition rather than advanced data analytics for forecasting.
This guide covers what sales forecasting is, why it breaks down, and how to build a process that produces numbers you can actually defend.
What Is Sales Forecasting?
Sales forecasting is the process of estimating future revenue over a defined period (typically a quarter or fiscal year) based on pipeline data, historical performance, and market conditions.
For B2B sales teams, an accurate forecast is the foundation of every major business decision: hiring, budgeting, board reporting, and go-to-market planning. When the forecast is reliable, the entire organization can plan with confidence. When it isn't, decisions get made on guesswork. And the costs compound across every department.
Most revenue orgs have a forecasting process. The problem isn't the absence of a process. It's that the process is built on unreliable inputs. Four in five sales and finance leaders missed at least one quarterly forecast in the past year, with over half missing it two or more times. Nearly all of them (97%) acknowledged that better tools and data would make their forecasts more accurate. The problem is widely recognized. It's just not being fixed.
Why Sales Forecasting Is Harder Than It Looks
Most forecasting breakdowns aren't caused by bad math or poor CRM hygiene alone. They're built into the way most teams think about what a forecast is.
CRM stage ≠ deal health. Stage labels reflect what reps believe is happening, not what's actually happening behaviorally. A deal sitting in "Proposal Sent" looks identical in the CRM whether the prospect responded to the proposal yesterday or hasn't opened an email in three weeks. The number is the same. The risk is not.
Think about what Roberge's behavioral model would have flagged. Say a rep committed a deal because they had a good discovery call, but the CRM showed no subsequent stakeholder engagement, no follow-up activity, and a close date that had already slipped twice. The model would discount it. The rep's gut wouldn't.
Gut feel doesn't scale. A VP of Sales managing a 5-person team can read her pipeline personally. She knows every deal, every relationship, every red flag. When the team grows to 20 reps with 30 open deals each, that intuition disappears. She's now forecasting based on 30-minute update calls where reps tell her what they want her to hear, not what's actually in the data.
Commit ≠ Close. The gap between what reps call "committed" and what actually closes is where accuracy dies. 54% of deals forecast by reps never close. Most teams don't track this gap systematically, which means they can't improve it.
Recency bias inflates optimism. Reps call deals based on their most recent conversation, not the full behavioral arc of the opportunity. A promising call last Tuesday can make a deal look far healthier than it is.
The data is stark: only 10% of sales activities are captured in the CRM in most companies. If that's the input, the output shouldn't surprise anyone. Fewer than 50% of sales leaders report high confidence in their own organization's forecast accuracy. And yet, the forecast is the number underlying every major decision.
Sales Forecasting Methods
There's no single right method for B2B sales forecasting. Most teams use some combination of the approaches below, and which ones work best depends on data maturity, team size, and sales cycle complexity.
Historical / Trend-Based Forecasting
This method uses past performance to project forward. It's the simplest place to start and reasonable for teams with clean historical data. Its weakness is that it's blind to what's actually happening in the current pipeline. A great Q3 last year tells you nothing about whether the deals currently in stage 4 are real.
Opportunity Stage / Pipeline-Weighted Forecasting
Weighted Pipeline is the default method in most CRMs. Each stage gets a probability assigned to it (e.g., Discovery = 20%, Proposal = 50%, Negotiation = 80%), and the forecast is the sum of deal values multiplied by stage probabilities. It's intuitive, but accurate only if stage definitions are consistently applied across the team. As we've established, they almost never are.
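As a minimal sketch, the weighted-pipeline calculation looks like this. The stage probabilities are the illustrative examples from above, not benchmarks:

```python
# Weighted-pipeline forecast: sum of deal value x stage probability.
# Stage probabilities here are illustrative, not benchmarks.
STAGE_PROBABILITY = {
    "Discovery": 0.20,
    "Proposal": 0.50,
    "Negotiation": 0.80,
}

def weighted_pipeline_forecast(deals):
    """deals: list of (value, stage) tuples for open opportunities."""
    return sum(value * STAGE_PROBABILITY[stage] for value, stage in deals)

pipeline = [(50_000, "Discovery"), (80_000, "Proposal"), (30_000, "Negotiation")]
print(weighted_pipeline_forecast(pipeline))  # 10,000 + 40,000 + 24,000 = 74,000
```

The simplicity is the point, and also the weakness: if "Proposal" means different things to different reps, the 50% weight is being multiplied against fiction.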
Bottom-up Forecasting
This method aggregates individual rep commits. Each rep calls their own deals, and the number rolls up through the org. This is the most common method for mid-market B2B sales teams. It's also only as good as rep judgment, which is the problem Roberge was solving when he replaced it with behavioral models at HubSpot.
Multivariable / AI-Driven Forecasting
This method incorporates behavioral signals alongside stage data: engagement frequency, stakeholder breadth, response latency, historical win rates by deal type, rep-specific accuracy patterns. This is the most accurate forecasting approach—and the hardest to implement without the right infrastructure.
The progression from historical → stage-weighted → bottom-up → behavioral follows a team's data maturity curve. Most B2B SaaS companies in the $10M–$100M ARR range are stuck at stage-weighted or bottom-up. The teams that hit consistent forecast accuracy (world-class is defined as 90%+ accuracy within the first 30 days of a quarter) are the ones that make the shift to behavioral inputs.
What Actually Goes Into an Accurate Sales Forecast
The forecasting method is almost secondary to the quality of the inputs feeding it. Here's what separates forecasts that hold from forecasts that don't.
Pipeline Coverage
How much pipeline do you need to hit quota? The standard rule of thumb for B2B SaaS is 3–4x pipeline coverage at typical win rates. But coverage quantity and coverage quality are not the same thing.
A pipeline of 40 deals where half haven't had buyer-side activity in 30 days is not the same as a pipeline of 20 deals where every opportunity has an active champion and a defined next step. The first inflates the number. The second tells you something real.
Let's apply this to the HubSpot model. Roberge's team didn't just count pipeline value; they weighted it against behavioral signals. A pipeline at 3x nominal coverage might have been 1.5x effective coverage once stalled opportunities were discounted. That's the difference between a confident forecast and a hopeful one.
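A sketch of the nominal-vs-effective distinction, assuming a simple rule that deals with no buyer-side activity in 30 days are discounted to zero (the threshold is an illustrative assumption):

```python
from datetime import date, timedelta

# Nominal vs. effective pipeline coverage (illustrative threshold).
# A deal with no buyer-side activity in STALL_DAYS counts as zero.
STALL_DAYS = 30

def coverage(deals, quota, today):
    """deals: list of (value, last_buyer_activity_date) tuples."""
    nominal = sum(v for v, _ in deals) / quota
    active = sum(v for v, last in deals if (today - last).days <= STALL_DAYS)
    return nominal, active / quota

today = date(2024, 6, 30)
deals = [
    (100_000, today - timedelta(days=5)),   # engaged
    (100_000, today - timedelta(days=45)),  # stalled -> discounted to zero
    (100_000, today - timedelta(days=12)),  # engaged
]
nominal, effective = coverage(deals, quota=100_000, today=today)
print(nominal, effective)  # 3.0x nominal, 2.0x effective
```

The same pipeline value produces two very different coverage numbers depending on whether you count stalled deals at face value.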
Win Rate by Segment, Not Blended
A blended win rate hides as much as it reveals. If your team closes 25% of opportunities overall, that number might be masking a 38% rate on SMB deals and an 11% rate on enterprise—two completely different businesses with completely different forecasting implications.
Track win rate by deal size tier, by rep, by product line, by lead source, and by sales cycle stage. The variance across these dimensions tells you where the forecast is reliable and where it's a guess.
Average Sales Cycle Length
Every deal in your pipeline has an implied close date. Some of those close dates are realistic. Many are not.
Deals that have been open significantly longer than your average cycle for their stage are overdue. These deals are more likely to slip, die quietly, or get pushed to next quarter. The pipeline review should flag these systematically, not leave it to individual reps to self-report.
Behavioral Signals: The Accuracy Unlock
Behavior is where forecast accuracy separates from forecast aspiration.
Behavioral signals are the patterns of activity that actually predict whether a deal will close: how often the prospect is responding, how many stakeholders have been engaged, when the last meaningful two-way interaction occurred, whether the economic buyer has been looped in, how the deal's activity pattern compares to historical wins at the same stage.
These signals predict close probability more reliably than any stage label or rep-assigned probability. A deal in "Negotiation" where the champion hasn't responded in two weeks and only one stakeholder has ever been contacted is a stalled deal with a misleading label.
This is exactly what Roberge's models captured. Rather than asking reps "where does this deal stand?" he asked the data: is the prospect engaging? Are the right people in the room? Is the pace consistent with how HubSpot's historical wins moved? If the behavioral pattern didn't match, they discounted the deal, regardless of what the rep said.
Most teams track these signals manually, inconsistently, or not at all. That gap is the primary driver of forecast error.
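A hypothetical version of this kind of behavioral discount might look like the following. The thresholds and weights are illustrative assumptions, not Roberge's actual model:

```python
from dataclasses import dataclass

# Hypothetical behavioral discount in the spirit of the signals above.
# Thresholds and weights are illustrative assumptions.
@dataclass
class Deal:
    value: float
    stage_probability: float   # CRM stage weight
    days_since_response: int   # last two-way interaction
    stakeholders_engaged: int
    economic_buyer_engaged: bool

def behavioral_weight(d: Deal) -> float:
    w = 1.0
    if d.days_since_response > 14:
        w *= 0.5   # champion gone quiet
    if d.stakeholders_engaged < 2:
        w *= 0.6   # single-threaded
    if not d.economic_buyer_engaged:
        w *= 0.7   # no economic buyer in the room
    return w

def forecast_value(d: Deal) -> float:
    return d.value * d.stage_probability * behavioral_weight(d)

# A "Negotiation" deal with a silent champion and only one contact:
stalled = Deal(100_000, 0.80, days_since_response=15,
               stakeholders_engaged=1, economic_buyer_engaged=False)
print(forecast_value(stalled))  # roughly 16,800, not the 80,000 the stage implies
```

The exact numbers matter less than the shape of the logic: the stage label says 80%, but the behavior says otherwise, and the behavior wins.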
How to Run a Sales Forecast Call
The regular forecast call is where the process either holds together or falls apart. These are the practices that separate a disciplined forecast review from a political exercise:
1. Start with the math, not opinions. Before anyone says a word, put the coverage math on the table: pipeline value × weighted win rate = expected revenue. If the math doesn't support the number someone is about to commit, the number is wrong. Work backward from the math, not forward from rep optimism.
2. Inspect deals against behavioral criteria, not stage labels. For every deal in the committed category: what happened in the last two weeks? Who responded? Who hasn't? Is there an active champion? When's the next scheduled interaction? These questions expose the gap between what the CRM says and what's actually happening.
3. Define your categories and enforce the definitions. Commit, Most Likely, Pipeline, Upside. Each category should have a team-agreed definition based on buyer actions, not rep confidence. A Commit deal should require specific evidence: a champion who has confirmed budget, an economic buyer who is engaged, a defined close plan. Without defined criteria, every rep fills in their categories differently.
4. Track forecast vs. actual every quarter. Without a variance log, forecasting is superstition. Which reps consistently beat their commits? Which ones consistently miss? Which deal categories are the most reliable? Which are the most optimistic? This institutional knowledge is what makes forecasting improve over time, and it only accumulates if you document it.
5. Discount explicitly for known risk. Slip candidates, single-threaded deals, and stalled opportunities with no recent activity should be explicitly discounted in the forecast. The discipline is in the adjustment, not in taking them at face value and hoping they'll close anyway.
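Steps 1 and 5 above can be sketched together: start from the coverage math, then discount flagged deals explicitly. The risk categories and discount factors here are illustrative assumptions, not prescribed values:

```python
# Explicit risk discounting (step 5) applied before the coverage math (step 1).
# Risk flags and discount factors are illustrative assumptions.
RISK_DISCOUNT = {
    "slip_candidate": 0.5,
    "single_threaded": 0.6,
    "stalled": 0.3,
}

def expected_revenue(deals, weighted_win_rate):
    """deals: list of (value, [risk_flags]) in the committed pipeline."""
    total = 0.0
    for value, flags in deals:
        adjusted = value
        for flag in flags:
            adjusted *= RISK_DISCOUNT[flag]
        total += adjusted
    return total * weighted_win_rate

committed = [
    (200_000, []),                   # clean commit
    (150_000, ["single_threaded"]),  # one contact only
    (100_000, ["stalled"]),          # no recent activity
]
print(expected_revenue(committed, weighted_win_rate=0.30))  # 320,000 x 0.30 = 96,000
```

Writing the discounts down forces the team to defend them, which is exactly the conversation the forecast call is for.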
97% of sales leaders believe forecasting would improve with better collaboration between Sales and Finance. But 35% say their top barrier is that the process takes too long and isn't collaborative. A structured forecast call process addresses both.
How to Measure and Improve Forecast Accuracy
Most teams know their forecast is off. Fewer track by exactly how much, and fewer still use that variance data to get better.
The forecast accuracy formula:
Forecast Accuracy = (Actual Revenue ÷ Forecasted Revenue) × 100
A result of 95% means actuals came in 5% below forecast (105% means 5% above). World-class is 90%+ accuracy within the first 30 days of a quarter. Most B2B sales teams are operating at 60–75%.
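In code, the formula is a one-liner:

```python
# Forecast accuracy as defined above: (actual / forecast) x 100.
def forecast_accuracy(actual, forecast):
    return actual / forecast * 100

print(forecast_accuracy(9_500_000, 10_000_000))   # 5% under forecast
print(forecast_accuracy(10_500_000, 10_000_000))  # 5% over forecast
```

Note that over- and under-shooting both count as error; a team that "beats" its forecast by 15% every quarter is just as unpredictable to finance as one that misses by 15%.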
The benchmark gap is stark: teams using data-driven forecasting lift quota attainment rates by roughly 20 percentage points, compared to intuition-led teams. The difference isn't method; it's inputs. This is the same lesson Roberge was demonstrating at scale: when you replace gut feel with behavioral data, your whole team gets better at executing against what the data is telling them.
Track accuracy at multiple levels. Overall team accuracy is a useful headline number. Accuracy by rep, by deal tier, and by forecast category tells you where the problems actually live. A rep who consistently over-commits in the Upside category needs different coaching than one whose Commit category is unreliable.
The Most Common Accuracy Improvement Levers
- Tighter stage exit criteria: require buyer actions, not just seller activities, to advance a deal
- Behavioral signal tracking at the deal level: engagement recency, stakeholder breadth, response patterns
- Defined forecast categories with explicit, written criteria
- Weekly or bi-weekly variance reviews to identify patterns early
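A minimal sketch of the multi-level accuracy tracking described above, using hypothetical (rep, category, forecast, actual) records:

```python
from collections import defaultdict

# Variance log sketch: accuracy by forecast category shows where error
# lives. Records are hypothetical: (rep, category, forecast, actual).
records = [
    ("ana", "Commit", 500_000, 480_000),
    ("ana", "Upside", 200_000, 60_000),
    ("ben", "Commit", 400_000, 410_000),
    ("ben", "Upside", 150_000, 50_000),
]

def accuracy_by(records, key_index):
    forecast = defaultdict(float)
    actual = defaultdict(float)
    for rec in records:
        k = rec[key_index]
        forecast[k] += rec[2]
        actual[k] += rec[3]
    return {k: round(actual[k] / forecast[k] * 100, 1) for k in forecast}

print(accuracy_by(records, key_index=1))  # Commit ~98.9, Upside ~31.4
```

In this hypothetical log, the headline number hides the real story: the Commit category is nearly perfect while Upside is wildly optimistic, which tells you exactly where the coaching conversation belongs.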
Platforms that automatically track behavioral signals across deals (engagement, stakeholder access, response latency) remove the manual gap in the inputs. That's why teams using behavioral forecasting consistently hit 3–5% variance, while teams relying on CRM stage data alone languish at 8–15%. Chief tracks these signals automatically, connecting deal-level behavioral data directly to the forecast so the number reflects what's actually happening.
How Accurate Is Your Team's Forecast?
Use the Forecast Accuracy Calculator to calculate your team's current variance, benchmark it against industry standards, and identify which inputs are creating the most error.
Sales Forecasting Tools
There are dozens of tools you can use for sales forecasting. Most of them fit into three main categories:
Spreadsheet-based forecasting is where most early-stage teams start. It's accessible and flexible. These tools eventually break under scale: manual updates, version control problems, no behavioral data, no signal layer. Roberge quickly outgrew it at HubSpot—a spreadsheet can't process engagement signals across hundreds of reps and thousands of deals.
CRM-native forecasting in tools like Salesforce and HubSpot is the default for most mid-market teams. The cost to use it is low. Weighted pipeline probability usually comes built in. The accuracy ceiling of a CRM-native tool is the quality of your stage definitions and the discipline of your reps' data entry. Without those, you're automating guesses.
Dedicated forecasting platforms sit on top of your CRM and add a signal layer: behavioral data, AI-driven deal scoring, rep-level accuracy benchmarking, variance tracking over time. These are the tools that close the accuracy gap because they're analyzing better inputs.
Sellers who effectively partner with AI tools are 3.7x more likely to hit quota than those who don't. The mechanism is the same as better forecasting: better signals, not better arithmetic.
Sales Forecasting FAQ
What is sales forecasting?
Sales forecasting is the process of estimating how much revenue a sales team will generate over a defined period, typically a quarter or year. It draws on pipeline data, historical win rates, deal-level signals, and market conditions to produce a number the business can plan around.
What is the difference between a sales forecast and a sales projection?
A sales forecast is an operationally grounded estimate based on current pipeline data and historical performance. A sales projection is typically a higher-level financial modeling exercise (often used in planning or investor contexts) that applies growth assumptions to historical trends without grounding in live deal data. In B2B sales operations, forecast is the more precise term.
What are the main sales forecasting methods?
The four primary methods are historical/trend-based, opportunity stage/pipeline-weighted, bottom-up, and multivariable/AI-driven. Most teams use a combination, and accuracy tends to improve as teams move toward behavioral inputs.
How do you calculate a sales forecast?
The most common formula is the sum of (deal value × stage probability) across all active opportunities. More sophisticated approaches weight deals based on behavioral signals (engagement recency, stakeholder breadth, historical win rates for similar deals) rather than relying solely on stage probability.
What is a good sales forecast accuracy percentage?
World-class is 90%+ accuracy within the first 30 days of a quarter. Most B2B sales teams operate at 60–75%. Teams using behavioral data as a forecasting input consistently hit 85–95% accuracy.
Why is sales forecasting important?
Every major business decision downstream depends on forecasting: headcount planning, marketing spend, product investment, board reporting, and investor confidence. Over half of sales leaders miss their quarterly forecast at least twice a year; the organizational consequences of that extend well beyond the sales team.
How often should you update a sales forecast?
At minimum, monthly. High-performing teams typically review weekly. The cadence matters less than the discipline. Each review should compare prior forecasts against actual outcomes and update the current forecast based on changes in deal activity, not rep sentiment.
What data do you need to forecast sales?
At baseline: pipeline value by stage, historical win rates by stage, average sales cycle length, and deal close dates.
For more accurate forecasting: behavioral signals at the deal level (engagement frequency, stakeholder breadth, response latency), rep-level accuracy history, and segment-level win rates by deal size, product line, and lead source.
What's the difference between bottom-up and top-down forecasting?
Bottom-up forecasting aggregates individual rep commits upward through the organization. Top-down forecasting starts with an overall revenue target and allocates it downward based on capacity and historical performance. In B2B sales, bottom-up is more common and more granular—though only as accurate as the quality of individual rep judgment.
How does AI improve sales forecasting?
AI improves forecasting primarily by processing behavioral signals at a scale humans can't. Rather than relying on rep-reported stage data, AI-driven forecasting ingests engagement data, stakeholder activity, response patterns, and historical outcomes to produce deal-level probability scores that reflect what's actually happening. Sellers who use AI tools for forecasting are 3.7x more likely to hit quota. The accuracy improvement is one mechanism of that leap.
The Bottom Line: Good Data In, Good Forecast Out
Sales forecasting is primarily an inputs problem. Most teams are forecasting from stage labels that don't mean anything, rep commits built on optimism, and CRM data that captures maybe 10% of what's actually happening in their deals.
Roberge figured this out early at HubSpot. The behavioral model wasn't about distrusting his reps; it was about giving them a more honest mirror. When the data showed a deal was stalling, it became a coaching conversation, not a blame game. Quarter after quarter, the number got more predictable, not less.
Better sales forecasting requires behavioral signals that reflect what's actually happening at the deal level, tracked consistently, and used to inform both the forecast and the conversations that build it. That’s why we built Chief to automatically log CRM updates and surface deals earlier. When you have access to this data, your forecast improves, and the entire revenue org benefits.
Schedule a demo today to see how Chief helps revenue orgs improve forecast accuracy.



