IN THIS ISSUE 🌱

Good Morning {{first_name}}!

Malene here.

This week, we are talking about one of the most expensive misalignments in B2B lifecycle marketing: the MQL that nobody trusts. Marketing calls a lead qualified. Sales calls the same lead and finds out they downloaded one whitepaper out of curiosity and have absolutely no purchase intent or budget authority.

The sales rep stops trusting the marketing team's leads. Marketing stops getting useful feedback from sales. The pipeline report looks busy while the revenue results say otherwise. This is the lead scoring problem, and it is more common than anyone wants to admit.

We are going to fix it by separating fit from intent, weighting signals correctly, and building a model that your sales team will actually believe in. Also: a visit to the careers page is not a buying signal. Please deduct points accordingly.

Let’s dive in.

COMPANIES USING PROPERLY STRUCTURED LEAD SCORING SEE A 77% INCREASE IN ROI COMPARED TO THOSE THAT DO NOT

LET’S EXAMINE THE ISSUE
That number from Landbase's lead scoring research is significant.

But the word "properly" is doing a lot of work in that sentence. Lead scoring that assigns equal weight to every click, treats a whitepaper download the same as a pricing page visit, and has no negative scoring logic is not proper lead scoring. It is a point accumulation system that rewards volume over intent.

The 77% lift comes from the teams that have built their scoring models around actual predictors of conversion, not from the teams that are just counting digital interactions and calling the high scorers qualified.

ENGAGEMENT IS A VANITY METRIC UNLESS IT IS TIED TO A PROPENSITY TO BUY 🌊

WHAT YOU MAY BE SEEING
Lead scoring can fool you.

Here is the version of this problem that plays out in almost every B2B CRM audit I have ever run. The lead scoring model assigns points for email opens, link clicks, content downloads, and website visits. A lead accumulates enough points to hit the MQL threshold and gets routed to sales. Sales calls the lead and discovers that the person opened several emails because the subject lines were interesting, downloaded a whitepaper because the topic was tangentially relevant to a project they are working on, and has no budget, no timeline, and no decision-making authority.

The sales rep marks the lead as not qualified and moves on. Marketing sees the rejected MQL and assumes the sales team is not following up properly. The real problem is that the scoring model was measuring activity rather than intent, and those are not the same thing.

A lead who visited your pricing page three times in one week has demonstrated more genuine intent than a lead who opened twelve emails over three months and never clicked anything. A CEO from a 500-person company in your target industry who opened one email is worth more pipeline investment than a student who clicked every link in your nurture sequence. Your scoring model needs to reflect these distinctions, or your MQL pipeline will consistently disappoint the sales team until they stop trusting it entirely.

Acquisition fills the bucket. But a lead scoring model that cannot distinguish between curiosity and buying intent is routing the wrong contacts to sales and leaving genuinely qualified leads unaddressed in the nurture queue.

THE INTENT-FIT MATRIX IS HOW YOU BUILD A SCORING MODEL SALES WILL ACTUALLY BELIEVE

GET STRATEGIC ABOUT FIXING IT
The fix for broken lead scoring is not more points or a higher MQL threshold.

It is a fundamentally different architecture that separates two distinct dimensions of lead quality and scores them independently.

FIT IS WHO THEY ARE. INTENT IS WHAT THEY DO. Fit describes the firmographic and demographic profile of the contact relative to your ideal customer profile. Company size, industry, job title, seniority level, and budget authority are all fit signals. A contact who matches your ICP on these dimensions is a high-fit lead regardless of their current behaviour.

Intent describes the behavioural pattern of the contact relative to purchase decision signals. Pricing page visits, ROI calculator usage, product demo requests, and comparison content engagement are all intent signals. A contact demonstrating these behaviours is showing buying intent regardless of their firmographic profile.

The four quadrants that result from this framework drive four completely different responses from your CRM. A high-fit, high-intent lead should be routed to sales immediately. A high-fit, low-intent lead needs nurture content designed to build readiness rather than a sales call they are not ready for. A low-fit, high-intent lead needs more qualifying information gathered before sales time is invested. A low-fit, low-intent lead should stay in a light nurture track with minimal investment until something changes.
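If you want to see the quadrant logic outside your CRM's workflow builder, it fits in a few lines. Here is a minimal sketch in Python; the thresholds and route names are placeholder assumptions, not prescriptions:

```python
# Hypothetical thresholds; calibrate these against your own ICP and conversion data.
FIT_THRESHOLD = 50
INTENT_THRESHOLD = 50

def route_lead(fit_score: int, intent_score: int) -> str:
    """Map a lead's two independent scores onto one of four routing actions."""
    high_fit = fit_score >= FIT_THRESHOLD
    high_intent = intent_score >= INTENT_THRESHOLD
    if high_fit and high_intent:
        return "route_to_sales"         # ready now: fast lane to a rep
    if high_fit:
        return "nurture_for_readiness"  # right company, not yet shopping
    if high_intent:
        return "gather_qualification"   # interested, but is it a buyer?
    return "light_nurture"              # minimal investment until something changes

print(route_lead(fit_score=72, intent_score=15))  # nurture_for_readiness
```

The point of keeping the two scores as separate inputs, rather than summing them into one number, is that a single blended score cannot tell you which of the four responses a lead needs.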

THE POINT WEIGHTING THAT ACTUALLY PREDICTS REVENUE: Not all actions should receive equal scoring weight, and the weighting should be calibrated against what your closed-won deals actually looked like. A pricing page visit should be worth ten to fifteen times more than a blog post read. A demo request should be worth fifty points. A careers page visit should deduct points because it signals job seeking rather than buying interest. An email open, given the privacy pre-loading issues discussed in a previous issue, should receive minimal or zero weight. The point values in your model should reflect the actual correlation between each action and conversion, not a gut feeling about what sounds significant.

NEGATIVE SCORING IS NOT OPTIONAL: A lead scoring model without negative scoring is incomplete. Every signal that indicates a contact is unlikely to be a buyer needs to actively reduce the score so they do not accumulate points through high-volume, low-intent activity. Job titles like student, intern, researcher, or consultant on a personal account are common negative indicators. Careers page visits beyond a single view. Unsubscribe page visits. Competitor domain email addresses. The specific negative signals that matter for your business should come from your sales team because they know which contact profiles waste their time. Ask them directly, build those signals into your model, and recalibrate quarterly.

SCORE DECAY IS THE MECHANISM MOST TEAMS FORGET: A lead who was highly engaged six months ago and has been silent since is not warm. They are a stale lead with an artificially inflated score. Score decay logic automatically reduces a contact's score over time when no new engagement is recorded, which ensures that your MQL threshold reflects current intent rather than historical activity. Without decay, your high-scoring segment fills up with contacts who were interested once and have moved on, and the threshold becomes meaningless as a signal of current readiness.

PULL YOUR LAST TEN CLOSED-WON DEALS AND REVERSE-ENGINEER YOUR SCORING MODEL THIS WEEK 🧪

THE PLAY
Let’s go pull some contact records.

Pull the contact records for your last ten closed-won deals and look at the first high-intent action each contact took before eventually converting. Was it a pricing page visit? A demo request? A specific content download? Then look at what their lead score was at the point of first meaningful sales contact.

If the scores are inconsistent or if several of these contacts were never technically MQLs under your current model, your threshold and weighting are miscalibrated.

Adjust the weight of the action that most reliably preceded conversion in your closed-won data, and verify that your current MQL threshold would have flagged these contacts before they found their way to sales through other means.
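If your CRM lets you export those ten deals, the audit itself is a short script. The column names below are assumptions about what a hypothetical export might contain; map them to whatever your CRM actually calls these fields:

```python
import csv
from collections import Counter

def audit(rows: list[dict]) -> tuple[Counter, list[str]]:
    """Summarise first high-intent actions and list deals the model never flagged."""
    first_actions = Counter(r["first_high_intent_action"] for r in rows)
    never_mql = [r["deal_id"] for r in rows if r["ever_hit_mql"] == "no"]
    return first_actions, never_mql

# Usage with a CRM export (hypothetical filename and columns):
# rows = list(csv.DictReader(open("closed_won_last_10.csv")))
# actions, missed = audit(rows)
# actions.most_common(3)  -> the actions to up-weight
# missed                  -> closed-won deals your current threshold never caught
```

The `most_common` actions are your candidates for heavier weighting; any deal in the `missed` list is direct evidence that your current threshold is miscalibrated.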

CLOSING THE LOOP

Lead scoring is the strategic filter that determines whether your sales team spends their time on prospects who are ready to buy or on contacts who are just mildly interested.

A model built on fit and intent rather than raw engagement volume, weighted against actual closed-won behaviour rather than assumed point values, and maintained with decay logic and negative scoring, is the difference between a pipeline report that reflects real opportunity and one that makes everyone feel busy while revenue stays flat.

Stop counting. Start weighing. Your sales team and your quarterly numbers will both notice the difference.

P.S.

Does your current lead scoring model include negative scoring, and does it have any decay logic built in? And when did you last calibrate the point weights against your actual closed-won data?

Hit reply and tell me where your model currently sits. The gap between how most teams have their scoring configured and what actually predicts conversion is significant, and I want to build a full calibration framework issue around what people find when they go and look properly.

Until next Tuesday,
Malene
