Incrementality vs Attribution: Which Gives Better Answers
Incrementality measures causal impact. Attribution tracks customer journeys. Here's when to use each and how they complement one another.
Two different tools for two different questions
Attribution answers: "Which touchpoints did the customer interact with before converting?"
Incrementality answers: "Would the customer have converted without this touchpoint?"
These sound similar but lead to very different conclusions. Attribution tells you a story about the customer journey. Incrementality tells you whether your advertising actually changed the outcome. One is narrative. The other is causal.
The marketing industry spent 15 years refining attribution models -- last-touch, first-touch, linear, time-decay, data-driven, algorithmic. These models distribute credit across touchpoints, and they're useful for understanding customer behavior. But they share a fundamental flaw: they assume every touchpoint contributed to the conversion. They cannot tell you which touchpoints were truly necessary.
That's what incrementality testing does. And the results often contradict what attribution models report.
How attribution works -- and where it breaks
The mechanics
Attribution models track user interactions (clicks, views, site visits) and assign credit for conversions to those interactions based on rules or algorithms.
Last-touch attribution gives 100% credit to the final interaction before conversion. Simple, but it ignores everything that came before.
First-touch attribution gives 100% credit to the first known interaction. It emphasizes discovery but ignores the nurturing process.
Multi-touch attribution (MTA) distributes credit across all touchpoints using rules (linear, time-decay, position-based) or algorithms (data-driven models that weight touchpoints based on statistical patterns).
Where attribution fails
It can't identify causation. If a customer clicks five ads before purchasing, that doesn't mean all five ads were necessary. Some may have been irrelevant interruptions. Attribution models assign credit to all of them anyway.
It depends on tracking. Attribution is only as good as the data it captures. With iOS 14+ opt-outs, cookie deprecation, and cross-device blind spots, MTA models now work with 40-60% of actual user journeys. The rest is modeled -- estimated based on patterns from the visible data.
It's inherently self-referential. Platform-based attribution (Meta's, Google's) only considers touchpoints within that platform. Cross-platform attribution tools are better but still miss offline channels, word of mouth, and untracked interactions.
It over-credits lower-funnel channels. Attribution models naturally favor touchpoints closest to conversion. Retargeting ads, branded search, and email campaigns sit at the bottom of the funnel and capture a disproportionate share of credit -- even though upper-funnel activities may have been what created the demand in the first place.
A 2024 analysis of 150 ecommerce brands found that attribution models over-credited retargeting by an average of 6.2x compared to incrementality measurements. Branded search was over-credited by 4.8x. Prospecting campaigns were under-credited by 1.4x.
How incrementality testing works -- and its limits
The mechanics
Incrementality testing creates two groups: one that sees your marketing (treatment) and one that doesn't (control/holdout). The difference in conversion rates between groups represents the true causal impact.
This is the gold standard for measuring whether your advertising changed behavior. No modeling. No assumptions about touchpoint contribution. Just: did the people who saw ads convert at a higher rate than those who didn't?
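The arithmetic behind a holdout test is simple enough to sketch. A minimal example in Python, with made-up user and conversion counts (nothing here comes from a real campaign):

```python
# Minimal sketch of computing lift from a holdout test.
# All numbers are illustrative, not from a real campaign.

def incremental_lift(treated_users, treated_conversions,
                     holdout_users, holdout_conversions):
    """Return (absolute lift, relative lift) between treatment and control."""
    treated_rate = treated_conversions / treated_users
    holdout_rate = holdout_conversions / holdout_users
    absolute = treated_rate - holdout_rate
    relative = absolute / holdout_rate if holdout_rate else float("inf")
    return absolute, relative

# Example: 90k users saw ads, 10k were held out.
abs_lift, rel_lift = incremental_lift(90_000, 2_700, 10_000, 250)
print(f"treatment rate: {2_700/90_000:.2%}")   # 3.00%
print(f"control rate:   {250/10_000:.2%}")     # 2.50%
print(f"absolute lift:  {abs_lift:.2%}")       # 0.50%
print(f"relative lift:  {rel_lift:.1%}")       # 20.0%
```

The relative lift (20% here) is what most vendors report; the absolute lift is what you multiply by audience size to estimate incremental conversions.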
Where incrementality falls short
It's point-in-time. An incrementality test tells you the lift from a specific campaign during a specific period. It doesn't tell you what's happening right now unless you're running a test right now. Attribution models provide continuous measurement.
It requires sacrifice. Holdout groups don't see your ads, which means lost revenue during the test. For a brand spending $1M+/month, a 10% holdout over 4 weeks withholds roughly $100K of spend -- along with whatever incremental revenue that spend would have generated. The measurement value usually exceeds this cost, but it's not free.
It can't guide daily optimization. Incrementality tests take weeks to produce results. If you need to decide whether to shift $5K from Campaign A to Campaign B today, incrementality data from last month's test helps directionally, but it can't respond to real-time performance changes.
It measures the average effect, not individual contributions. An incrementality test on Meta tells you whether Meta ads, in aggregate, drive incremental conversions. It doesn't tell you which specific ad, audience, or placement within Meta is most incremental. You need attribution data for that granularity.
When to use each approach
Use attribution for:
- Day-to-day campaign optimization. Which creatives are performing? Which audiences are converting? Which placements drive the best CPA? Attribution data (even imperfect attribution data) answers these tactical questions in near-real-time.
- Customer journey analysis. Understanding the typical path from awareness to purchase. How many touchpoints do customers interact with? What's the average time from first touch to conversion?
- Budget allocation within a channel. When you know a channel is working (validated by incrementality), use attribution to optimize how you spend within that channel.
Use incrementality for:
- Validating channel effectiveness. Before committing budget to a channel, test whether it actually drives incremental conversions. This is especially important for retargeting, branded search, and any channel you've been running for 6+ months without questioning.
- Budget allocation across channels. Move budget from channels with low incrementality to channels with high incrementality. A quarterly incrementality testing cadence provides the data for these decisions.
- Justifying ad spend to leadership. CFOs and boards don't care about attribution models. They care about causal impact. Incrementality tests provide the kind of evidence that finance teams trust.
- Measuring unmeasurable channels. Podcasts, TV, billboards, influencer partnerships, and PR can't be tracked with pixels. Geo-based incrementality tests measure their impact without requiring any tracking infrastructure.
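A geo-based test reduces to a back-of-the-envelope difference-in-differences calculation: compare how conversions changed in test geos versus control geos. A minimal sketch with made-up weekly totals (the function and figures are illustrative, not a production geo-testing methodology):

```python
# Hedged sketch: geo-based lift via difference-in-differences.
# Conversion counts are made up for illustration.

def did_lift(test_pre, test_post, control_pre, control_post):
    """Difference-in-differences estimate of incremental conversions
    in the test geos, using control geos to remove the baseline trend."""
    test_change = test_post - test_pre
    control_change = control_post - control_pre
    # Scale the control group's change to the test group's pre-period size.
    expected_change = control_change * (test_pre / control_pre)
    return test_change - expected_change

# Weekly conversions summed across geos, before and during the campaign.
lift = did_lift(test_pre=1_000, test_post=1_300,
                control_pre=2_000, control_post=2_200)
print(lift)  # 200.0 incremental conversions attributed to the campaign
```

Because the comparison is between regions rather than tracked users, no pixels or device IDs are required -- which is why this design works for TV, podcasts, and billboards.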
How to combine both approaches
The smartest marketers use both together. Here's the framework:
Step 1: Establish causal baselines with incrementality
Run incrementality tests on each major channel once per quarter. This gives you the true incremental CPA and ROAS for each channel -- your ground truth.
Step 2: Calibrate your attribution model
Compare your attribution model's credit allocation to your incrementality results. If your model says Meta drives 40% of conversions but incrementality tests show Meta's incremental contribution is 25%, apply a 0.625 calibration factor (25/40) to Meta's attributed conversions.
Do this for every channel you've tested. Your calibrated attribution model now approximates incremental value in real-time.
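The calibration step fits in a few lines. The Meta figures below mirror the example above (40% attributed vs. 25% incremental); the branded-search figures and conversion counts are hypothetical:

```python
# Sketch of calibrating attributed conversions against incrementality results.
# Meta figures mirror the example in the text; the rest is hypothetical.

def calibration_factor(incremental_share, attributed_share):
    """Factor that scales attributed conversions toward incremental reality."""
    return incremental_share / attributed_share

factors = {
    "meta":           calibration_factor(0.25, 0.40),  # 0.625, from the example
    "branded_search": calibration_factor(0.05, 0.20),  # hypothetical over-credited channel
}

attributed = {"meta": 4_000, "branded_search": 2_000}  # monthly attributed conversions
calibrated = {ch: attributed[ch] * factors[ch] for ch in attributed}
print(calibrated)  # {'meta': 2500.0, 'branded_search': 500.0}
```

A factor below 1.0 means attribution is over-crediting the channel; anything near 0.25, as in the branded-search row, is a strong signal to re-test that channel before spending more on it.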
Step 3: Use calibrated attribution for daily decisions
With calibrated weights, your attribution data becomes a reliable proxy for incremental value. Optimize campaigns, shift budgets, and evaluate creative using the calibrated model.
Step 4: Re-test quarterly and recalibrate
Incrementality changes as your brand grows, competition shifts, and platforms evolve. Quarterly tests keep your calibration factors current.
This framework gives you the best of both worlds: the causal rigor of incrementality testing and the real-time granularity of attribution modeling. Neither approach alone is sufficient. Together, they form a measurement system that's both accurate and actionable.
The cost of getting this wrong
Brands that rely solely on attribution typically over-invest in lower-funnel channels by 25-40% because those channels appear to have the best ROAS. They under-invest in prospecting and brand building, which creates a slowly shrinking top of funnel that eventually strangles growth.
Brands that run incrementality tests but ignore attribution data make better channel-level decisions but struggle with campaign-level optimization. They know which channels work but can't efficiently allocate budget within those channels.
The gap between these approaches represents real money. For a brand spending $500K/month on paid media, a 20% improvement in allocation efficiency -- achievable by combining incrementality and attribution -- translates to $100K/month in either saved spend or additional revenue from better-placed dollars.
Frequently Asked Questions
Can attribution models ever show causal impact?
Standard attribution models -- even sophisticated algorithmic ones -- show correlation, not causation. However, some advanced approaches bridge the gap. Shapley value models borrow from game theory to estimate each channel's marginal contribution. Markov chain models simulate removing individual channels and estimate the conversion impact. These approaches are closer to causal measurement than simple rule-based models, but they still rely on observed data and can't account for customers who would have converted without any marketing. Only controlled experiments (holdout tests) provide true causal evidence.
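For intuition, a Shapley-style allocation can be sketched directly: enumerate channel orderings and average each channel's marginal contribution. The coalition values below are synthetic conversion counts (real data-driven models estimate them from observed journeys, not a lookup table):

```python
# Illustrative Shapley-value credit allocation over three channels.
# The coalition values are synthetic, not real data.
from itertools import permutations

# v(S): conversions observed when only the channels in S are active.
v = {
    frozenset(): 0,
    frozenset({"search"}): 100,
    frozenset({"social"}): 80,
    frozenset({"email"}): 40,
    frozenset({"search", "social"}): 150,
    frozenset({"search", "email"}): 120,
    frozenset({"social", "email"}): 100,
    frozenset({"search", "social", "email"}): 160,
}

channels = ["search", "social", "email"]

def shapley(channel):
    """Average marginal contribution of `channel` over all channel orderings."""
    total = 0.0
    orderings = list(permutations(channels))
    for order in orderings:
        before = frozenset(order[:order.index(channel)])
        total += v[before | {channel}] - v[before]
    return total / len(orderings)

credits = {ch: shapley(ch) for ch in channels}
print(credits)
print(sum(credits.values()))  # Shapley credits always sum to v(all channels)
```

Note the limitation the answer above describes: the credits always sum to the observed total (160 here), so the method redistributes observed conversions fairly but cannot say how many of those conversions would have happened with no marketing at all.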
How do I explain the difference between attribution and incrementality to my CEO?
Use an analogy: Attribution is like asking who touched the basketball last before it went through the hoop and giving them credit for the point. Incrementality testing is asking whether the team scores more points when a specific player is on the court versus on the bench. The first approach describes what happened. The second measures whether that player actually makes a difference. Your CEO cares about whether the marketing budget is making a difference, which is an incrementality question.
Is incrementality testing replacing attribution?
No. The industry is moving toward combining both approaches, not replacing one with the other. Google, Meta, and major measurement platforms are all building frameworks that layer incrementality data on top of attribution models. The trend is toward "triangulation" -- using multiple measurement methods (attribution, incrementality, media mix modeling) and cross-referencing their results. Each method has blind spots that the others cover. Brands that rely on a single measurement approach are the ones making the most expensive mistakes.
Go Funnel uses server-side tracking and multi-touch attribution to show you which ads actually drive revenue. Book a call to see your real numbers.
Want to see your real ROAS?
Connect your ad accounts in 15 minutes and get attribution data you can actually trust.
Book a Call