Geo Testing vs Conversion Lift Studies: A Comparison
Geo testing and conversion lift studies both measure incrementality, but they work differently. Here's when to use each and how to interpret the results.
Two paths to the same question: do your ads actually work?
Both geo testing and conversion lift studies aim to measure incrementality -- the revenue your ads generate that wouldn't have happened otherwise. But they use fundamentally different mechanisms, produce different types of insights, and come with different trade-offs.
For CMOs who need to prove marketing ROI to the board, understanding these differences isn't academic. It determines whether your incrementality measurement is trusted, accurate, and actionable.
How each method works
Conversion lift studies
Conversion lift studies are run within an ad platform's infrastructure. The platform randomly divides users into two groups:
- Exposed group: Users who are eligible to see your ads
- Holdout group: Users who are excluded from seeing your ads
After the test period, the platform compares conversion rates between the two groups. The difference is the lift -- the incremental conversions attributed to ad exposure.
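The arithmetic behind a lift readout is simple. A minimal sketch of the calculation -- all group sizes and conversion counts below are hypothetical, purely for illustration:

```python
# Hypothetical lift-study readout (all numbers are illustrative).
exposed_users, exposed_conversions = 900_000, 13_500
holdout_users, holdout_conversions = 100_000, 1_200

exposed_rate = exposed_conversions / exposed_users  # 1.50%
holdout_rate = holdout_conversions / holdout_users  # 1.20%

# Absolute lift: extra conversions per exposed user attributable to ads.
absolute_lift = exposed_rate - holdout_rate
# Relative lift: how much ads raised the baseline conversion rate.
relative_lift = absolute_lift / holdout_rate

incremental_conversions = absolute_lift * exposed_users

print(f"relative lift: {relative_lift:.0%}")                      # 25%
print(f"incremental conversions: {incremental_conversions:.0f}")  # 2700
```

With these made-up numbers, ads lifted the conversion rate 25% above baseline, worth about 2,700 incremental conversions.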
Key characteristic: Randomization happens at the user level, not the geographic level. The platform decides who sees ads and who doesn't, regardless of location.
Meta, Google, and TikTok all offer their own versions of conversion lift studies, though availability varies by spend level and account status.
Geo testing
Geo testing divides geographic regions (not users) into test and control groups:
- Test regions: Ads run normally (or with changed spend)
- Control regions: Ads are paused or reduced
After the test period, you compare business outcomes (revenue, conversions, traffic) between the two groups. The difference, adjusted for historical baselines, is the incremental impact.
Key characteristic: Randomization happens at the geographic level. All users in a region get the same treatment.
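The baseline adjustment mentioned above is typically a difference-in-differences: compare each group's change against its own pre-test baseline, so pre-existing gaps between regions cancel out. A minimal sketch, using hypothetical weekly revenue figures:

```python
# Hypothetical weekly revenue (pre-test baseline vs. test period); illustrative only.
test_pre, test_during = 200_000, 230_000        # ads running
control_pre, control_during = 150_000, 160_500  # ads paused

# Change relative to each group's own baseline.
test_growth = test_during / test_pre - 1        # +15.0%
control_growth = control_during / control_pre - 1  # +7.0%

# Difference-in-differences: growth beyond what control regions show.
incremental_growth = test_growth - control_growth   # +8.0%
incremental_revenue = test_pre * incremental_growth # $16,000/week

print(f"incremental growth: {incremental_growth:.1%}")
print(f"incremental revenue: ${incremental_revenue:,.0f}/week")
```

Here the control regions grew 7% on their own, so only the extra 8 points of growth in the test regions count as incremental.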
Side-by-side comparison
| Dimension | Conversion Lift Study | Geo Testing |
|-----------|----------------------|-------------|
| Randomization unit | Individual users | Geographic regions |
| Who runs it | Ad platform (Meta, Google) | You (or your agency) |
| Data source | Platform's own data | Your backend data (Shopify, CRM) |
| Channels measured | Single platform only | Any channel or combination |
| Setup complexity | Low (platform handles it) | Medium-high (manual design) |
| Minimum spend | Platform-dependent (often $50K+) | Depends on regional volume |
| Duration | 2-4 weeks typical | 3-6 weeks typical |
| Cross-channel effects | Not captured | Captured (e.g., Meta's impact on Google) |
| Potential bias | Platform controls measurement | You control measurement |
| Statistical rigor | User-level randomization (stronger) | Region-level (less granular) |
Strengths of conversion lift studies
User-level randomization
The biggest advantage of lift studies is the randomization mechanism. By randomly assigning individual users to exposed and holdout groups, the platform creates very similar comparison groups. Every demographic, behavior, and location characteristic is balanced between groups -- at least in theory.
This is a stronger experimental design than geographic randomization, where regional differences can introduce noise.
Easy setup
A Meta conversion lift study takes about 15 minutes to configure: you set your test parameters, and the platform handles randomization, delivery, and measurement. No geographic analysis, no DMA matching, no manual data collection.
For CMOs who need incrementality data but don't have a measurement team, this ease of use is a significant advantage.
Granular results
Because lift studies work at the user level, they can provide insights by audience segment, creative, and placement. "Your prospecting campaigns to 25-34-year-olds generate 22% incremental lift" is more actionable than "Meta drives 18% incremental lift in the Southeast."
Weaknesses of conversion lift studies
Platform bias
Here's the uncomfortable truth: when Meta measures Meta's effectiveness, there's an inherent conflict of interest. Meta's revenue depends on advertisers believing their ads work. While there's no evidence of deliberate manipulation, the structural incentive exists.
The measurement uses Meta's own conversion data, which includes the same modeling and estimation that inflates regular reporting. The holdout group may still see organic Facebook/Instagram content, which isn't fully controlled for.
Single-platform measurement
A Meta lift study only measures Meta's incremental impact. It can't tell you how Meta interacts with Google, TikTok, or other channels. If Meta awareness campaigns drive Google search conversions, the lift study won't capture that cross-channel effect -- potentially understating Meta's true value.
Availability and cost
Not all accounts have access to conversion lift studies. Meta typically requires significant monthly spend (often $50K+), and study availability can change without notice. Google's version has similar restrictions. This limits the tool to larger advertisers.
Holdout leakage
The platform decides who's in the holdout group, but it can't prevent those users from seeing your brand through other channels, word of mouth, or organic social. If holdout users convert through these paths, the measured lift understates the true incremental impact.
Strengths of geo testing
Platform independence
You control the entire experiment: design, execution, data collection, and analysis. Your backend data (actual orders, actual revenue) is the measurement source, not platform-reported conversions. There's no platform bias in the measurement.
Cross-channel measurement
Geo testing captures the full impact of a channel across all downstream effects. If pausing Meta in control regions also reduces Google search volume and direct traffic, the geo test captures that full revenue impact. This cross-channel measurement is impossible with single-platform lift studies.
Any channel, any strategy
Geo testing works for channels that don't offer lift studies: podcasts, influencer marketing, TV, direct mail. It also works for complex strategies like "What's the incremental impact of increasing Meta spend by 30% while simultaneously launching TikTok?"
Transparency
Every step of a geo test is visible and auditable. The regions, the methodology, the data, and the analysis are all under your control. This transparency makes results more credible to skeptical stakeholders (like a CFO who doesn't trust ad platform self-reporting).
Weaknesses of geo testing
Statistical noise
Geographic regions are noisy. Local events, weather, competition, and economic conditions create variation that doesn't exist in user-level randomization. You need more data (more regions, longer test periods) to achieve the same statistical confidence as a lift study.
Revenue sacrifice
Pausing ads in control regions means forgoing revenue during the test period. For a 4-week test with 30% of regions in the control group, you're forgoing the ad-driven revenue of those regions -- roughly 30% of your total ad-driven revenue for the month. This cost is real and must be weighed against the value of the insight.
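The cost of the holdout can be estimated before committing. A back-of-envelope sketch, with hypothetical inputs for weekly ad-driven revenue and a prior estimate of how much of it is truly incremental:

```python
# All inputs are hypothetical; plug in your own numbers.
weekly_ad_revenue = 500_000   # ad-attributed revenue across all regions
control_share = 0.30          # fraction of regions paused
test_weeks = 4
incrementality_est = 0.60     # prior estimate of the truly incremental share

# Upper bound: all ad-attributed revenue in control regions disappears.
max_cost = weekly_ad_revenue * control_share * test_weeks
# More realistic: only the incremental portion is actually lost.
expected_cost = max_cost * incrementality_est

print(f"worst case: ${max_cost:,.0f}, expected: ${expected_cost:,.0f}")
```

Note the irony: the more incremental your ads actually are, the more the test costs -- and the more valuable it is to know.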
Complexity
Designing a good geo test requires statistical expertise: region matching, power analysis, difference-in-differences modeling. Without this expertise, results can be unreliable. Many brands don't have in-house measurement teams and need external help.
Contamination risk
People move between regions. Media crosses borders. A customer in a control region might see an ad while traveling through a test region. National brand-building efforts affect both groups. These contamination effects reduce the clean signal of the test.
When to use each approach
Use conversion lift studies when:
- You need a quick incrementality check on a single platform
- Your spend qualifies for platform lift studies
- You want segment-level insights (audience, creative, placement)
- You don't have a measurement team to design geo tests
- You trust the platform's measurement enough for directional guidance
Use geo testing when:
- You need platform-independent measurement
- You're measuring cross-channel effects
- You're testing channels that don't offer lift studies
- You need results that will withstand CFO-level scrutiny
- You have the statistical capability to design and analyze the test
- You're calibrating or validating your attribution model
Use both when:
- You can afford the investment in both
- You want to cross-validate results (if both methods agree, confidence is high)
- You need platform-specific insights (lift study) and cross-channel impact (geo test)
How to reconcile different results
It's common for conversion lift studies and geo tests to produce different incrementality estimates. When they disagree:
1. Check the measurement scope. Lift studies measure single-platform impact. Geo tests measure total impact, including cross-channel effects. The geo test number is usually larger because it captures downstream effects.
2. Check the time period. If the tests ran at different times, market conditions may have changed.
3. Lean toward the geo test for budget decisions. Since geo tests use your backend data and capture cross-channel effects, they're typically more reliable for overall budget allocation. Use lift study results for within-platform optimization (which campaigns, audiences, and creatives to focus on).
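Once both numbers are in hand, the gap between them is itself informative: the geo test's surplus over the lift study is a rough proxy for the channel's cross-channel and downstream contribution. A sketch with hypothetical estimates:

```python
# Hypothetical incremental-revenue estimates for the same channel and period.
lift_study_incremental = 180_000  # single-platform effect (platform's data)
geo_test_incremental = 240_000    # total effect (your backend data)

# Rough share of the total impact that the lift study cannot see.
cross_channel_share = 1 - lift_study_incremental / geo_test_incremental

print(f"~{cross_channel_share:.0%} of the geo-measured impact is downstream")  # ~25%
```

Treat this as a diagnostic, not a precise decomposition: differences in time period, noise, and methodology also contribute to the gap.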
FAQ
Can I run a conversion lift study and geo test simultaneously?
Yes, but be careful. If the geo test pauses ads in certain regions while the lift study holds out certain users, you create interactions between the two experiments. Run them sequentially (lift study first for quick directional data, then geo test for validation) or ensure the geographic and user-level holdouts don't overlap.
Are conversion lift study results from Meta trustworthy?
They're more trustworthy than platform-reported ROAS because they use a control group. But they're not fully independent -- Meta controls the experiment and uses its own data. Treat them as useful directional signals, not definitive proof. Cross-validate with geo testing or backend analysis.
How much should I budget for incrementality testing?
For a conversion lift study, the cost is the revenue lost from the holdout group (typically 5-10% of your audience). For a geo test, the cost is the revenue lost from control regions (typically 20-40% of included regions). In both cases, the cost is temporary and the insight is permanent.
Go Funnel uses server-side tracking and multi-touch attribution to show you which ads actually drive revenue. Book a call to see your real numbers.
Want to see your real ROAS?
Connect your ad accounts in 15 minutes and get attribution data you can actually trust.
Book a Call