
What Is Incrementality Testing and Why It Beats A/B Testing

Incrementality testing measures whether your ads cause conversions or just take credit for them. Here's why it beats standard A/B testing for ad measurement.

Go Funnel Team · 7 min read

A/B testing tells you which ad is better. Incrementality testing tells you if ads work at all.

Most marketers understand A/B testing: show version A to one group, version B to another, measure which performs better. It's useful for optimizing creative, landing pages, and offers. But it answers a fundamentally limited question -- which option is better -- without answering the question that actually matters: did this advertising cause these conversions, or would they have happened anyway?

That's what incrementality testing answers. And the results frequently surprise people.

A large DTC brand running $3M/month on Meta ran their first incrementality test in 2024. Their platform-reported ROAS was 5.2x. The incrementality test showed the true incremental ROAS was 1.8x. Roughly 65% of their "attributed" conversions would have happened without the ads.

How incrementality testing works

The concept is straightforward: divide your audience into two groups. Show ads to one group (the treatment group). Show no ads -- or a placebo ad -- to the other group (the holdout group). Compare conversion rates between the groups.

The difference in conversions between the treatment and holdout groups represents the incremental lift -- the conversions that were actually caused by your advertising.

The math behind incrementality

Incremental lift = (Treatment conversion rate - Holdout conversion rate) / Holdout conversion rate

If your treatment group converts at 4.2% and your holdout group converts at 3.1%, your incremental lift is:

(4.2% - 3.1%) / 3.1% = 35.5%

This means your ads are causing a 35.5% increase in conversions above what would have happened organically. The remaining 3.1% baseline conversion rate represents customers who would have bought regardless of seeing your ad.

In this example, of the treatment group's 4.2% conversion rate, only 1.1 percentage points were caused by ads -- about 26% (1.1 / 4.2). So for every 100 conversions your ad platform claims, only about 26 were truly incremental; the other 74 would have happened anyway. This is a common finding, and it's why incrementality testing matters.
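The math above can be sketched in a few lines of Python. The rates are the illustrative figures from the example, not real data:

```python
# Minimal sketch of the lift math above; rates are the example figures.
def incremental_lift(treatment_rate, holdout_rate):
    """Relative lift caused by ads over the organic baseline."""
    return (treatment_rate - holdout_rate) / holdout_rate

def incremental_share(treatment_rate, holdout_rate):
    """Fraction of the treatment group's conversions that are incremental."""
    return (treatment_rate - holdout_rate) / treatment_rate

lift = incremental_lift(0.042, 0.031)
share = incremental_share(0.042, 0.031)
print(f"Incremental lift:  {lift:.1%}")   # ~35.5%
print(f"Incremental share: {share:.1%}")  # ~26.2%
```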

Incrementality vs A/B testing: the key differences

A/B testing compares two variants of something (ad creative, landing page, email subject line) where both groups are exposed to marketing. It measures relative performance. It cannot tell you whether either variant is actually driving conversions that wouldn't have happened without it.

Incrementality testing compares an exposed group to an unexposed group. It measures absolute impact. It answers: "What is the causal contribution of this marketing activity?"

Here's where it gets uncomfortable for agencies: A/B testing always makes your marketing look productive. You're comparing Ad A (which drove conversions) to Ad B (which also drove conversions). Both look good. Incrementality testing might reveal that neither ad was driving meaningful incremental conversions -- the customers were going to buy anyway.

Why A/B testing misses the point for ad measurement

A/B testing has three blind spots that incrementality testing addresses:

1. Selection bias. A/B tests within an ad platform only test people who are already in your targeting audiences. These audiences are pre-selected by the platform's algorithm to be likely converters. You're measuring the difference between two creatives among people who were already predisposed to buy -- not measuring whether the entire advertising effort is worthwhile.

2. No baseline measurement. Without a holdout group that sees zero ads, you have no way to know the organic conversion rate. Your "winning" ad variant might have a 4% conversion rate, but if the organic rate is 3.8%, you're spending significant budget to drive a 0.2 percentage point lift.

3. Cross-channel contamination. A/B tests on one platform can't account for the effect of your other marketing channels. A customer might see your Meta ad (variant A), your Google ad, your email campaign, and your TikTok ad. The Meta A/B test attributes the conversion to variant A, ignoring that the other channels may have been the actual drivers.

When incrementality testing provides the most value

High-spend channels you've never questioned

If you've been spending $50K+/month on a channel for over a year without ever measuring its incremental impact, that's where incrementality testing pays off fastest. Agencies that run incrementality tests on "established" channels find that 30-40% of those channels deliver less than half the value that platform attribution claims.

Retargeting campaigns

Retargeting is the channel most likely to over-claim conversions. These campaigns target people who already visited your site -- people with demonstrated purchase intent. An incrementality test on retargeting frequently shows incremental ROAS of 0.5-1.5x versus platform-reported ROAS of 8-15x.

One agency ran holdout tests on retargeting across 12 ecommerce clients. Average platform-reported ROAS was 11.3x. Average incremental ROAS was 1.2x. Seven of the twelve clients would have been better off reducing retargeting spend and reallocating to prospecting.

Brand campaigns on paid search

Bidding on your own brand name in Google Ads is another area where incrementality testing reveals uncomfortable truths. Studies from eBay and others have shown that 80-90% of branded search clicks would have gone to the organic listing anyway. You're paying for traffic you'd get for free.

Before major budget increases

If your CEO wants to scale ad spend from $200K to $500K per month, incrementality testing tells you whether the current spend is actually working before you pour more money in. Scaling a channel with low incrementality just means wasting money faster.

How to set up your first incrementality test

Step 1: Choose one channel or campaign to test. Don't try to test everything at once. Pick the channel where you have the most uncertainty about true impact.

Step 2: Define your holdout. For digital channels, platforms like Meta offer built-in conversion lift studies that create randomized holdout groups. For other channels, geographic holdouts work well -- pause advertising in select markets and compare to markets where you continue spending.

Step 3: Set your sample size and duration. You need enough conversions in both groups to reach statistical significance. For most ecommerce brands spending $50K+/month on a channel, a 2-4 week test with a 10-15% holdout provides reliable results.

Step 4: Measure the right outcome. Track actual purchases or revenue, not proxy metrics like clicks or add-to-carts. Incrementality should be measured on the metric that matters to your business.

Step 5: Calculate incremental CPA and ROAS. Divide your spend by only the incremental conversions (not total conversions) to get your true cost per acquisition and return on ad spend.
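Step 5 can be sketched as a small calculation. All the input numbers below are hypothetical examples, chosen only to show the mechanics:

```python
# Hedged sketch of Step 5: incremental CPA and ROAS from a holdout test.
# All inputs are hypothetical example numbers.
def incremental_metrics(spend, treatment_conversions, treatment_size,
                        holdout_rate, avg_order_value):
    # Conversions the treatment group would have had with no ads at all
    baseline = treatment_size * holdout_rate
    incremental = treatment_conversions - baseline
    icpa = spend / incremental                       # true cost per acquisition
    iroas = (incremental * avg_order_value) / spend  # true return on ad spend
    return incremental, icpa, iroas

inc, icpa, iroas = incremental_metrics(
    spend=50_000, treatment_conversions=2_100,
    treatment_size=50_000, holdout_rate=0.031, avg_order_value=120)
print(f"Incremental conversions: {inc:.0f}")                      # 550
print(f"Incremental CPA: ${icpa:.2f}, incremental ROAS: {iroas:.2f}x")
```

Note how different this is from dividing spend by all 2,100 attributed conversions, which would make the CPA look far lower than it really is.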

What to do with incrementality results

The test gives you a number. Here's how to act on it:

  • Incremental ROAS above 3x: Scale with confidence. This channel is genuinely driving profitable conversions.
  • Incremental ROAS between 1x and 3x: Optimize before scaling. The channel works, but there's likely waste to cut.
  • Incremental ROAS below 1x: Reduce spend or restructure. You're paying more for these conversions than they're worth.
  • Zero or negative lift: Pause the channel. Your ads aren't causing any conversions that wouldn't have happened organically.
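The decision thresholds above reduce to a simple lookup. This is only a sketch of the rules as stated, not a substitute for judgment:

```python
# Sketch of the decision thresholds above as a simple lookup.
def recommend(incremental_roas, lift):
    if lift <= 0:
        return "Pause the channel"
    if incremental_roas >= 3:
        return "Scale with confidence"
    if incremental_roas >= 1:
        return "Optimize before scaling"
    return "Reduce spend or restructure"

print(recommend(incremental_roas=1.2, lift=0.10))  # Optimize before scaling
```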

For agencies, incrementality testing is the most powerful tool for proving your value to clients. When you can show that your media buying generated a measurable, causal lift in conversions, you're having a fundamentally different conversation than "look at the platform ROAS."

Frequently Asked Questions

How much does incrementality testing cost?

The direct cost depends on the method. Platform-native tools like Meta's Conversion Lift are free to use but require minimum spend thresholds (typically $5K-$10K over the test period). Geo-based tests cost you the revenue you forgo in holdout markets -- typically 10-15% of your geographic footprint for 2-4 weeks. Third-party incrementality platforms charge $2K-$10K per test. The ROI is almost always positive: brands that run incrementality tests reallocate an average of 22% of their budget, eliminating waste that far exceeds the test cost.

Can small brands run incrementality tests?

Yes, but the methodology needs to be adapted. Small brands with fewer than 500 monthly conversions struggle to reach statistical significance with traditional holdout tests. Alternatives include time-based holdouts (pause ads for one week, compare to the previous week), geo-based tests in smaller markets, or using matched market pairs. The key constraint is sample size -- you need enough conversions in both treatment and holdout groups for the results to be meaningful. Brands with at least 200 monthly conversions can run viable incrementality tests with a 4-week duration.
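One way to sanity-check whether a small brand's test has enough conversions is a two-proportion z-test on the treatment and holdout conversion rates. The counts below are hypothetical, roughly matching a 4-week test with a 15% holdout:

```python
import math

# Hedged sketch: two-proportion z-test for the treatment-vs-holdout gap.
# Counts are hypothetical; |z| > 1.96 is roughly significant at the 95% level.
def two_proportion_z(conv_t, n_t, conv_h, n_h):
    p_t, p_h = conv_t / n_t, conv_h / n_h
    pooled = (conv_t + conv_h) / (n_t + n_h)        # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_t + 1 / n_h))
    return (p_t - p_h) / se

# ~200 monthly conversions over 4 weeks, 15% holdout (illustrative)
z = two_proportion_z(conv_t=195, n_t=5_100, conv_h=18, n_h=900)
print(f"z = {z:.2f}")
```

If |z| falls below 1.96, the honest answer is that the test needs more conversions (longer duration or a larger holdout), not that the channel has zero lift.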

How often should you run incrementality tests?

At minimum, test each major channel once per year. Incrementality changes over time as your brand awareness grows, competition shifts, and platform algorithms evolve. A channel that was highly incremental last year may have declining incrementality as your organic baseline increases. High-spend channels (above $100K/month) should be tested quarterly. When you make major changes to targeting, creative strategy, or funnel structure, re-test within 60 days to see if incrementality improved.


Go Funnel uses server-side tracking and multi-touch attribution to show you which ads actually drive revenue. Book a call to see your real numbers.
