Timothy Solomon

Lead Scoring is a Projection, Not a Measure

When you assign a number from 0-100 to a lead, you're not measuring something that exists. You're projecting multidimensional reality onto a single axis—and losing everything that matters in the process.

There's a mathematical concept that most CRM practitioners have never heard of, yet it governs everything they do: projection.

When you create a lead score—that number from 0 to 100 that's supposed to tell you how "good" a lead is—you're performing a projection. You're taking a rich, multidimensional object (everything you know about this contact and their company) and flattening it onto a single line.

This isn't just a metaphor. It's literally what's happening, and understanding it changes how you should think about lead scoring entirely.

What Projection Actually Means

Imagine you're holding a complex three-dimensional sculpture. You shine a light on it and look at the shadow on the wall. That shadow is a projection—a 2D representation of a 3D object.

Every projection loses information. You can't reconstruct the sculpture from its shadow. Multiple different sculptures could cast the same shadow. The shadow tells you something about the original, but it's fundamentally impoverished.

Now extend this to higher dimensions. Your leads have attributes across many dimensions:

  • Firmographic: company size, industry, revenue, growth rate, employee count
  • Technographic: current tools, tech stack, recent purchases, integrations
  • Behavioral: pages viewed, content downloaded, emails opened, demos attended
  • Intent: search terms, third-party signals, direct requests, competitive research
  • Contextual: timing, budget cycle, org changes, market conditions

Each of these is itself multidimensional. "Company size" might include employees, revenue, and locations. "Behavioral" might span dozens of tracked actions.

When you compute a lead score, you're projecting this 20-plus-dimensional object onto a single number. You're casting the shadow of a hyper-sculpture onto a line.

The Mathematics of Lost Information

Let's be precise about what we lose.

In linear algebra, when you orthogonally project a vector from n dimensions down to one, you preserve only the component in the direction of projection. Everything orthogonal to that direction vanishes.

For a lead score, this means:

Preserved: The degree to which this lead aligns with your scoring model's definition of "good."

Lost: Everything that makes this lead different from other leads with the same score.

Two leads with a score of 73 might have gotten there via completely different paths:

  • Lead A: Perfect firmographic fit, zero engagement
  • Lead B: Mediocre firmographic fit, extremely high engagement

These leads require entirely different responses. Lead A needs nurturing—they're a great fit who hasn't discovered you yet. Lead B needs sales engagement—they're highly interested and ready to act despite not being ideal.

But your score can't tell you this. The score flattened away the very information you need.
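
To make the flattening concrete, here's a minimal sketch in Python (the weights and attribute scales are invented for illustration). A weighted-sum score is a dot product with a fixed weight vector, so two leads that differ only in a direction orthogonal to that vector collapse onto the same number:

# A weighted-sum lead score is a projection: the dot product of a
# lead's attribute vector with the model's weight vector.
WEIGHTS = {"fit": 0.73, "engagement": 0.73}   # hypothetical model weights

def score(lead: dict) -> float:
    return sum(WEIGHTS[k] * lead[k] for k in WEIGHTS)

lead_a = {"fit": 100, "engagement": 0}    # perfect fit, zero engagement
lead_b = {"fit": 20, "engagement": 80}    # mediocre fit, heavy engagement

print(score(lead_a), score(lead_b))       # 73.0 73.0: the difference is gone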

Why We Still Do It

If projections lose so much information, why do we score leads at all?

Because humans can't process 20-dimensional space directly. We need simplifications to make decisions at scale. A sales rep can't evaluate every attribute of every lead—they need some way to prioritize.

The problem isn't scoring per se. The problem is treating the score as if it were the underlying reality.

A shadow is useful. It tells you something about shape and size. But you wouldn't try to pick up a shadow, or paint it, or sell it as a sculpture. You know it's a projection—a representation, not the thing itself.

Yet that's exactly what we do with lead scores. We treat them as measurements of an underlying property ("lead quality") rather than what they are: projections that deliberately sacrifice information for simplicity.

Lead Scoring vs. Lead Qualification

This brings us to the crucial distinction that most organizations miss entirely.

Scoring answers: "To what degree does this lead match our definition of ideal?"

Qualification answers: "Has this lead met the conditions to advance to the next stage?"

These are fundamentally different types of questions requiring fundamentally different types of answers.

Scoring: Continuous, Projective

  • Produces a number on a scale
  • Answers questions of degree ("how much?")
  • Inherently loses dimensional information
  • Changes gradually as attributes change
  • Useful for prioritization and resource allocation

Qualification: Categorical, Predicative

  • Produces a category (qualified / not qualified)
  • Answers questions of kind ("whether?")
  • Based on defined conditions being met
  • Changes discretely when conditions are satisfied
  • Useful for process management and handoffs

The catastrophic error—one committed by nearly every CRM implementation—is using scoring for qualification.

"A lead becomes an MQL when their score exceeds 65."

This sentence combines two incompatible concepts. It treats a projection (score) as if it could answer a categorical question (qualification). It's like saying "this sculpture becomes a circle when its shadow exceeds 10 inches." The statement isn't even wrong—it's a category error.
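
The arbitrariness is easy to demonstrate in code. A minimal sketch (the threshold is from the sentence above; the function is hypothetical): two leads separated by a rounding error land on opposite sides of what is supposed to be a difference in kind.

def mql_by_threshold(score: float) -> bool:
    # The category error in one line: a question of kind
    # ("is this an MQL?") answered by a magnitude comparison.
    return score > 65

print(mql_by_threshold(65.1))   # True
print(mql_by_threshold(64.9))   # False: 0.2 points of degree, a whole difference in kind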

What Qualification Actually Requires

Qualification is about meeting conditions. These conditions should be boolean—true or false—not matters of degree.

A lead becomes an MQL when:

  • They have engaged with educational content (yes/no)
  • AND they match at least one target persona (yes/no)
  • AND they are from a target account or market (yes/no)

An MQL becomes an SQL when:

  • They have expressed buying intent (demo request, pricing inquiry, etc.) (yes/no)
  • AND they are not a competitor or unqualifiable entity (yes/no)
  • AND they have been reviewed and accepted by sales (yes/no)

These are predicates—logical conditions that are either satisfied or not. No arbitrary thresholds. No "degree of MQL-ness."
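
As a sketch of how the second transition might be wired up (the record fields are invented for illustration), each clause maps onto one condition above, and the transition fires only when every one of them holds:

def mql_to_sql(record: dict) -> bool:
    # Each line is one predicate from the list above; all must be true.
    expressed_intent = record["demo_requested"] or record["pricing_inquiry"]
    qualifiable = not record["is_competitor"]
    sales_accepted = record["reviewed_by_sales"] and record["accepted_by_sales"]
    return expressed_intent and qualifiable and sales_accepted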

When you define qualification this way, several good things happen:

  1. Handoffs become unambiguous. Marketing and sales agree on exactly what an MQL is because the definition is explicit.

  2. Gaming becomes harder. You can't inflate MQL numbers by adjusting a threshold. The conditions either were met or they weren't.

  3. Diagnosis becomes possible. When conversion rates drop, you can examine which condition is failing. Is it engagement? Intent expression? Sales acceptance?

  4. Pipeline becomes trustworthy. Everyone knows what each stage means because it's defined by conditions, not arbitrary cutoffs.

The Attribution Bundle Problem

There's a deeper issue with projective scoring that deserves its own treatment: the attribution bundle.

Every lead score is implicitly an attribution model. It attributes "quality" to various factors and sums them. But attribution is itself a projection—you're taking credit that's distributed across time and touchpoints and projecting it onto individual factors.

Consider: A lead downloads a whitepaper (+10 points), then attends a webinar (+15 points), then requests a demo (+30 points). Those three touches add 55 points to their score.

But what caused them to request the demo? Was it the whitepaper? The webinar? Were these sequential dependencies (the webinar wouldn't have worked without the whitepaper) or independent influences?

Standard lead scoring treats these as additive. But human decision-making isn't additive. The webinar might have been useless without the whitepaper setting context. Or the whitepaper might have been sufficient alone, with the webinar adding nothing.

This is the attribution bundle: the collection of touchpoints that together produced an outcome, where individual contributions can't be cleanly separated. Projecting this bundle onto individual scores fundamentally misrepresents causality.
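
A small sketch makes the gap visible. The point values are the ones above; the path-aware rule is an invented alternative, shown only to contrast with additive credit:

def additive_score(touches: list[str]) -> int:
    # Standard scoring: every touchpoint earns its points independently.
    points = {"whitepaper": 10, "webinar": 15, "demo_request": 30}
    return sum(points.get(t, 0) for t in touches)

def path_aware_score(touches: list[str]) -> int:
    # Hypothetical alternative: the webinar earns credit only when the
    # whitepaper came first, a dependency no additive model can express.
    score = sum({"whitepaper": 10, "demo_request": 30}.get(t, 0) for t in touches)
    if "webinar" in touches and "whitepaper" in touches[:touches.index("webinar")]:
        score += 15
    return score

print(additive_score(["whitepaper", "webinar", "demo_request"]))    # 55
print(path_aware_score(["webinar", "whitepaper", "demo_request"]))  # 40: same touches, different story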

The implications for scoring are significant:

  1. Scores don't represent causal impact. Just because "webinar attendance" adds 15 points doesn't mean webinars cause quality.

  2. Score optimization doesn't optimize outcomes. Maximizing the score for a lead isn't the same as maximizing the probability of conversion.

  3. Historical scores don't predict future behavior. The path that got a lead to 73 doesn't tell you what will get them to 80.

What To Actually Do

Given all this, what's the right approach to lead scoring?

1. Use Scoring for Prioritization, Not Qualification

Scores are great for deciding who to call first when you have limited capacity. They're terrible for deciding who "counts" as an MQL.

Keep scoring. It's useful. But don't use it to gate process transitions.
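
Put differently: the score belongs in the sort key, never in the stage gate. A minimal sketch (the leads and capacity are invented):

leads = [{"name": "A", "score": 73}, {"name": "B", "score": 41}, {"name": "C", "score": 88}]
rep_capacity = 2

# Scores decide call order when capacity is limited...
calls_today = sorted(leads, key=lambda l: l["score"], reverse=True)[:rep_capacity]

# ...but stage transitions come from predicates (see the next step),
# never from comparing the score to a cutoff.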

2. Build Separate Qualification Logic

Create explicit boolean conditions for each stage transition. Document them. Make them visible to all teams.

def is_mql(lead: dict) -> bool:
    # Explicit, documented boolean conditions: no thresholds, no degrees.
    return (lead["has_content_engagement"]
            and lead["matches_target_persona"]
            and lead["from_target_market"])

3. Preserve Dimensional Information

Instead of (or in addition to) a single score, track the underlying dimensions:

  • Fit score: How well does firmographic/technographic profile match?
  • Engagement score: How much have they interacted?
  • Intent score: How strong are the buying signals?
  • Timing score: How aligned with typical purchase cycles?

These four sub-scores give you much more information than one composite score. You can act differently on high-fit/low-engagement vs. low-fit/high-engagement.
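
A sketch of what that buys you (the thresholds and routing labels are invented for illustration): with the dimensions kept separate, you can branch on the combination instead of the sum.

def route(fit: int, engagement: int) -> str:
    # Hypothetical routing on two preserved sub-scores (0-100 each).
    if fit >= 70 and engagement < 30:
        return "nurture"        # great fit that hasn't discovered you yet
    if fit < 30 and engagement >= 70:
        return "sales_review"   # highly engaged despite a weak fit
    if fit >= 70 and engagement >= 70:
        return "fast_track"
    return "monitor"

# A single composite score of 73 could not separate the first two cases.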

4. Separate Health from Score

Lead health is about temporal decay—is this lead going cold? That's a completely different dimension from fit.

A lead can be:

  • High fit, high health (great lead, act now)
  • High fit, low health (great lead going cold, re-engage)
  • Low fit, high health (active but poor fit, disqualify)
  • Low fit, low health (poor fit going cold, ignore)

If you mash health into score, you lose this differentiation.
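
One way to keep them apart (a sketch; the half-life and cutoffs are invented assumptions): compute health as a decay of engagement recency, then act on the fit-and-health quadrant.

import math

def health(days_since_last_touch: float, half_life_days: float = 14.0) -> float:
    # Temporal decay: health halves for every half_life_days of silence.
    return math.exp(-math.log(2) * days_since_last_touch / half_life_days)

def next_action(fit: float, health_score: float) -> str:
    high_fit = fit >= 0.7
    high_health = health_score >= 0.5
    if high_fit and high_health:
        return "act_now"
    if high_fit:
        return "re_engage"      # great lead going cold
    if high_health:
        return "disqualify"     # active but poor fit
    return "ignore"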

5. Accept the Information Loss

Finally, accept that any single number will lose information. The solution isn't a better scoring algorithm—it's not expecting scores to do what they can't do.

Use scores as one input among many. Combine them with explicit qualification logic, health tracking, and human judgment. Don't automate decisions based solely on scores.

The Deeper Principle

The distinction between scoring and qualification reflects a deeper epistemological principle: different questions require different types of answers.

"How good is this lead?" is a question of degree. It calls for a scalar answer—a position on a spectrum. Scoring answers this kind of question (imperfectly, as all projections do).

"Is this lead qualified?" is a question of kind. It calls for a categorical answer—membership in a class. Predicates answer this kind of question.

Confusing these types leads to nonsense. You end up with leads that are "somewhat MQL" or "73% qualified"—phrases that sound quantitative but actually mean nothing. An MQL is an MQL, or it isn't. Qualification is categorical.

This matters beyond CRM. The pattern—confusing questions of degree with questions of kind—appears everywhere:

  • "How ethical is this action?" (degree) vs "Is this action permissible?" (kind)
  • "How healthy is this patient?" (degree) vs "Does this patient have diabetes?" (kind)
  • "How good is this candidate?" (degree) vs "Is this candidate qualified?" (kind)

In each case, one question calls for scoring and one calls for predication. Treating them as interchangeable creates confusion.


Conclusion: Projection Has Its Place

Lead scoring isn't bad. Projection isn't wrong. They're tools—and tools have appropriate uses.

Use lead scoring when you need to:

  • Prioritize outreach among many leads
  • Allocate limited resources (who gets the call first?)
  • Identify segments for differentiated treatment
  • Track general trends in lead quality over time

Don't use lead scoring when you need to:

  • Define process stage transitions
  • Determine who "counts" for reporting
  • Make all-or-nothing decisions about handling
  • Explain to sales why a lead is ready (or not)

The lead score is a shadow. It's a projection of something richer and more complex. Like all projections, it loses information—necessarily, inevitably.

The question isn't how to make a perfect score. There's no such thing. The question is how to use imperfect scores wisely, alongside other tools that answer different kinds of questions.

Scoring tells you "how much." Qualification tells you "whether." You need both—but you need them separate.


This essay is part of the CRM Framework series. For implementation in software, see Oblio. For strategic consulting, see Hire Timothy Solomon.
