Attribution as Observer Physics
When you ask "which touchpoint caused this conversion?" you're not asking about an objective fact. You're asking an observer-dependent question—and the answer changes based on how and when you observe.
Every marketing attribution debate eventually reaches an impasse.
"First-touch attribution shows SEO drives our pipeline." "But last-touch shows sales demos are the key." "Multi-touch says it's actually content marketing." "Linear attribution says everything matters equally."
Who's right? The uncomfortable answer: they all are, and none of them are.
Attribution isn't like measuring the length of a table. The table has a definite length; we just have to measure carefully. Attribution doesn't work this way. The "correct" attribution doesn't exist independently of the observer, the model, and the moment of observation.
This isn't a flaw in our methods. It's a feature of the underlying reality. And once you understand why, you'll never think about attribution the same way.
The Quantum Inspiration
Before you dismiss this as metaphor-stretching, let me be precise about the parallel I'm drawing.
In quantum mechanics, certain properties of particles don't have definite values until they're measured. An electron doesn't have a definite position while it's not being observed—it exists in a superposition of possible positions. When you measure, the superposition "collapses" into a definite value.
Critically: the measurement itself affects what you find. Different measurement setups yield different results, and there's no "true" underlying value independent of measurement.
Attribution has the same structure.
A customer's journey doesn't have a single "true" cause until you apply an attribution model. The model is the measurement. Different models yield different results. And there's no model-independent truth about "what really caused the conversion."
This isn't because we lack data or because our models are imperfect. It's because causation in complex systems is observer-dependent.
Why Causation Is Observer-Dependent
Let's think carefully about what we mean by "cause."
A customer converted after the following sequence:
- Saw a display ad
- Clicked an organic search result
- Downloaded a whitepaper
- Attended a webinar
- Requested a demo
- Had a sales call
- Converted
Which touchpoint "caused" the conversion?
The counterfactual test: What would have happened if we removed this touchpoint? But this depends on the context:
- If we removed the display ad, would they still have searched? Maybe.
- If we removed the webinar, would they still have requested a demo? Maybe not—the webinar might have been crucial context.
- If we removed the sales call, would they have converted anyway? Probably not.
Every touchpoint passes the counterfactual test under some assumptions and fails under others. The "cause" depends on which counterfactual you're considering.
The temporal test: Did earlier touchpoints create conditions for later ones? Almost certainly yes. The display ad created awareness. The search satisfied curiosity. The whitepaper provided education. The webinar built trust. Each was necessary for what followed.
But if each was necessary, then all are causes. And if all are causes, "first touch" and "last touch" are equally valid framings.
The intervention test: If we could rerun the experiment and change one thing, what would have the biggest impact? This is the most actionable framing, but it's fundamentally about future interventions, not past causation.
Each test gives different answers. There's no neutral test that reveals the "true" cause.
The Observer Problem
Here's where it gets philosophically interesting.
When we apply an attribution model, we're not passively observing causation. We're actively constructing it. The model defines what counts as a cause, how causes combine, and how credit should flow.
First-touch attribution implicitly assumes: awareness is the hardest problem. Once someone knows about you, downstream conversion follows with high probability. Therefore, credit goes to the awareness-creating event.
Last-touch attribution implicitly assumes: conversion intent is the hardest problem. Many people are aware but don't convert. The thing that tips them over is most valuable. Therefore, credit goes to the conversion-triggering event.
Linear attribution implicitly assumes: all touchpoints contribute equally. The journey is what matters, not any single step. Therefore, credit spreads evenly.
Time-decay attribution implicitly assumes: recent touchpoints matter more because they're closer to the decision. Earlier touchpoints fade in relevance. Therefore, credit weights toward recency.
None of these are "wrong." Each embeds a theory about how marketing works. The model encodes assumptions. The output reflects those assumptions.
You don't discover attribution. You construct it.
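To make that construction concrete, here's a minimal Python sketch. The journey and its channels are hypothetical, and the time-decay half-life is an arbitrary assumed parameter; the point is only that the same touchpoint sequence yields different credit under each rule.

```python
# Hypothetical journey: (channel, days before conversion). Channels are unique here.
journey = [("display", 30), ("search", 21), ("whitepaper", 14),
           ("webinar", 7), ("sales_call", 1)]

def first_touch(journey):
    # All credit to the earliest touchpoint.
    return {ch: (1.0 if i == 0 else 0.0) for i, (ch, _) in enumerate(journey)}

def last_touch(journey):
    # All credit to the most recent touchpoint.
    n = len(journey)
    return {ch: (1.0 if i == n - 1 else 0.0) for i, (ch, _) in enumerate(journey)}

def linear(journey):
    # Equal credit to every touchpoint.
    share = 1.0 / len(journey)
    return {ch: share for ch, _ in journey}

def time_decay(journey, half_life_days=7.0):
    # Weight each touch by 2^(-days / half_life), then normalize so credit sums to 1.
    weights = {ch: 2 ** (-days / half_life_days) for ch, days in journey}
    total = sum(weights.values())
    return {ch: w / total for ch, w in weights.items()}

for model in (first_touch, last_touch, linear, time_decay):
    credit = model(journey)
    print(model.__name__, {ch: round(c, 2) for ch, c in credit.items()})
```

Four models, one journey, four different answers—each internally consistent, each encoding one of the theories above.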
The Heisenberg Analogy
In quantum mechanics, Heisenberg's uncertainty principle says you can't simultaneously know both the position and momentum of a particle with arbitrary precision. Measuring one disturbs the other.
Attribution has an analogous uncertainty:
You can't simultaneously know the value of a touchpoint AND the counterfactual world where it didn't exist.
To know a touchpoint's value, you need to compare: world with touchpoint vs. world without. But you can only observe one world—the one where the touchpoint existed. The counterfactual world is forever inaccessible.
You can model the counterfactual world (this is what attribution models do). But the model is a construction, not an observation. And different models construct different counterfactuals.
Furthermore: the act of running marketing campaigns changes the thing you're trying to measure. If you discover that webinars have high attribution, you run more webinars. This changes the baseline. What "caused" conversions last quarter might not cause them next quarter, precisely because you acted on the attribution.
The measurement affects the system. The observer is entangled with the observed.
Multi-Touch as Superposition
Here's a frame that might help: think of a pre-conversion customer as existing in a superposition of influenced states.
They've been exposed to your display ad, and your organic content, and your webinar, and your sales team. These exposures don't exist in sequence (even if they occurred in sequence)—they coexist in the customer's mind as a cloud of influences.
When the customer converts, they don't think "the webinar convinced me" or "the sales call convinced me." They think "yeah, I'm ready to buy." The influences are integrated, not itemized.
Attribution models "collapse" this superposition into discrete credit assignments. First-touch collapses onto the earliest influence. Last-touch collapses onto the most recent. Multi-touch distributes the collapse across touchpoints.
The collapse is necessary for decision-making—you need to allocate budget somewhere. But the collapsed state isn't the "truth." It's a projection of something richer onto something tractable.
The Relational Interpretation
In some interpretations of quantum mechanics—notably relational quantum mechanics—there's no absolute state of a system, only states relative to observers. What's true for one observer might not be true for another, and neither is wrong.
Attribution might be relational in this sense.
For the CFO: last-touch attribution makes sense. They want to know what closed deals. The final touchpoint is most relevant to revenue attribution.
For the brand team: first-touch attribution makes sense. They want to know what creates awareness. The opening touchpoint is most relevant to brand attribution.
For the content team: engagement-weighted attribution makes sense. They want to know what content performs. The most-engaged touchpoints are most relevant to content strategy.
Each observer has a legitimate perspective. Each gets a different "truth." And there's no meta-observer who sees the "real" truth above all these perspectives.
This isn't relativism—it's perspectivalism. The perspectives aren't arbitrary; they're grounded in different legitimate concerns. The CFO isn't wrong, and neither is the brand team. They're measuring different things because they're trying to answer different questions.
Implications for Practice
If attribution is observer-dependent, how should we actually approach it?
1. Choose Models Based on Decisions, Not Truth
Don't ask "which attribution model is correct?" Ask "what decision am I trying to make, and which model supports it?"
Decision: Where to cut budget? Use last-touch or time-decay. You want to see what's currently driving conversions, not what created awareness historically.
Decision: Where to invest in brand? Use first-touch or position-based. You want to see what opens doors, not what closes them.
Decision: How to design customer journeys? Use multi-touch or linear. You want to see the full path, not just the endpoints.
The model is a tool. Use the right tool for the job.
2. Run Multiple Models Simultaneously
Don't commit to a single attribution model. Run several in parallel and look at the pattern.
If first-touch, last-touch, and multi-touch all point to the same channels, you have convergent evidence. If they diverge dramatically, you have observer-dependent variation—and that itself is informative.
The disagreement between models reveals something about the structure of your customer journeys: how much path dependency exists, how distributed the influence is, and how much weight the endpoints carry relative to the middle of the journey.
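One simple way to quantify that convergence or divergence is to check how often the models agree on the top channel. The credit numbers below are hypothetical placeholders standing in for the outputs of the parallel model runs:

```python
# Hypothetical per-channel credit shares from three models run in parallel.
credit_by_model = {
    "first_touch": {"seo": 0.50, "display": 0.30, "webinar": 0.15, "sales": 0.05},
    "last_touch":  {"seo": 0.05, "display": 0.10, "webinar": 0.25, "sales": 0.60},
    "linear":      {"seo": 0.25, "display": 0.25, "webinar": 0.25, "sales": 0.25},
}

def ranking(credit):
    # Channels ordered from most to least credited.
    return sorted(credit, key=credit.get, reverse=True)

def top_channel_agreement(credit_by_model):
    # Fraction of model pairs that name the same top channel.
    tops = [ranking(c)[0] for c in credit_by_model.values()]
    pairs = [(a, b) for i, a in enumerate(tops) for b in tops[i + 1:]]
    return sum(a == b for a, b in pairs) / len(pairs)

score = top_channel_agreement(credit_by_model)
print(f"top-channel agreement: {score:.2f}")  # near 1.0 => convergent evidence
```

A high score is convergent evidence; a low score is the observer-dependent variation itself, made visible.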
3. Focus on Incrementality
The closest we can get to "true" causation is incrementality testing: controlled experiments where you turn marketing on and off for matched populations and measure the difference.
Incrementality testing doesn't rely on attribution models. It directly measures: what revenue exists with this channel vs. without it?
This is expensive and complex to run at scale. But for major channels and big decisions, it's the gold standard. It cuts through model dependency by actually observing counterfactuals (in different populations).
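The arithmetic behind an incrementality readout is straightforward. Here's a sketch with made-up numbers for a matched holdout test (the group sizes and conversion counts are purely illustrative):

```python
# Hypothetical holdout test: marketing on for the test group, off for control.
test_users, test_conversions = 100_000, 2_300
control_users, control_conversions = 100_000, 2_000

cr_test = test_conversions / test_users           # conversion rate with the channel
cr_control = control_conversions / control_users  # conversion rate without it

incremental_rate = cr_test - cr_control
lift = incremental_rate / cr_control              # relative lift over baseline
incremental_conversions = incremental_rate * test_users

print(f"lift: {lift:.1%}, incremental conversions: {incremental_conversions:.0f}")
```

Note what this measures: not which touchpoint deserves credit, but how many conversions exist because the channel exists. In practice you'd also want a significance test on the two proportions before acting on the difference.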
4. Embrace Attribution as Storytelling
Attribution is a narrative. It tells a story about how marketing influenced conversion. Like all narratives, it simplifies, emphasizes certain elements, and interprets ambiguous evidence.
This isn't bad—narratives are how humans make sense of complex systems. But recognize attribution reports as constructed stories, not objective facts.
The best attribution practice combines quantitative models with qualitative understanding. Numbers tell you what the model outputs. Qualitative insight tells you why that might or might not reflect reality.
5. Measure What You Can Control
Attribution is most useful when it informs action. And action is about the future, not the past.
Instead of asking "what caused this conversion?", ask "what can we change to cause more conversions?"
This reframes attribution from historical accounting to forward-looking experimentation. The past is ambiguous. The future is actionable.
The Deeper Point
I've been using physics as an analogy, but the deeper point is epistemological: what we know depends on how we know it.
Attribution claims to answer: "What caused this conversion?" But the honest answer is: "What does our model, applied to our data, with our assumptions, tell us about influence patterns that we choose to call causation?"
This isn't skepticism—it's precision. We can know a great deal about influence patterns. We can make excellent decisions based on attribution data. But we should hold those conclusions with appropriate humility.
The customer journey is a complex system. Complex systems resist simple causal explanations. Our models are useful compressions, not perfect mirrors.
Attribution and Free Will
Here's a philosophical tangent that might illuminate the issue.
In debates about free will, there's a similar observer-dependency. From a neuroscientific perspective, decisions are caused by brain states, which are caused by prior states, going back to the big bang. There's no "free" agent—just physics.
From a first-person perspective, you deliberate, weigh options, and choose. The decision feels genuinely open until you make it.
Both perspectives are "true" in their domains. Neither is complete. The neuroscientific view misses something about agency. The first-person view misses something about determinism.
Attribution has a similar structure. From an external modeling perspective, we can trace touchpoints and assign credit. From the customer's internal perspective, they made a decision based on their judgment, not because a webinar "caused" them.
The external model is useful for us. But it's not the full story. It's one perspective on a phenomenon that exceeds any single perspective.
Practical Attribution Architecture
Given all this philosophy, here's a practical architecture for attribution:
Layer 1: Event Tracking
Track everything. Every touchpoint, every engagement, every signal. Don't pre-filter based on what you think matters.
This is the "wave function"—the full state of influences before any attribution model collapses it.
Layer 2: Multiple Attribution Models
Run first-touch, last-touch, linear, time-decay, and position-based in parallel. Store all of them.
This gives you multiple "measurements"—each model's perspective on the same underlying data.
Layer 3: Decision-Specific Views
For each decision context, surface the relevant model:
- Budget optimization → last-touch or time-decay
- Brand investment → first-touch
- Journey design → linear or multi-touch
Match the measurement to the question.
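As a sketch of what Layer 3 might look like in configuration, here's a minimal decision-to-model mapping. The context names and model identifiers are hypothetical, not a real API:

```python
# Hypothetical Layer 3 config: which stored attribution views serve which decision.
DECISION_VIEWS = {
    "budget_optimization": ["last_touch", "time_decay"],
    "brand_investment": ["first_touch"],
    "journey_design": ["linear", "multi_touch"],
}

def models_for(decision: str) -> list[str]:
    # Fail loudly rather than silently defaulting to one "true" model.
    try:
        return DECISION_VIEWS[decision]
    except KeyError:
        raise ValueError(f"no attribution view configured for {decision!r}")

print(models_for("budget_optimization"))
```

The design choice worth noting: there is deliberately no default model. Every decision context must declare which observer it is.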
Layer 4: Incrementality Validation
For major channels and campaigns, run incrementality tests. Compare model-based attribution to experimental results.
This calibrates your models against something closer to causal ground truth.
Layer 5: Narrative Integration
Combine quantitative attribution with qualitative insight. Talk to customers. Understand the story behind the journey.
The numbers are the skeleton. The narrative is the flesh.
Conclusion: Embracing Observer-Dependence
Attribution is observer-dependent. The "cause" of a conversion isn't an objective fact waiting to be discovered—it's a construction that depends on the model, the moment, and the question.
This might feel destabilizing. We want firm answers. We want to know that webinars drove pipeline or that SEO is overrated. Observer-dependence seems to dissolve these certainties.
But actually, it clarifies them. Once you accept that attribution is constructed, you can be intentional about the construction. You choose models that serve your decisions. You recognize the limitations. You triangulate across perspectives. You validate with experiments.
The observer-dependent view is more honest—and therefore more useful—than the naive view that "true" attribution exists if we just find the right model.
Marketing is not physics. Causation is not observation. Attribution is not measurement.
And that's okay. We can work with observer-dependent knowledge. We do it all the time—in medicine, in economics, in everyday life. The key is knowing that's what we're doing.
This essay is part of the CRM Framework series. For implementation in software, see Oblio. For strategic consulting, see Hire Timothy Solomon.