Incrementality Testing for Marketers: Measure the True Impact of Your Campaigns
Learn how to design and run incrementality tests that prove whether your marketing campaigns are actually driving results—or just taking credit for organic conversions.


Attribution tells you which touchpoints were present before a conversion. Incrementality testing tells you which touchpoints actually caused the conversion. That's a crucial distinction—and ignoring it means you're probably wasting money on campaigns that take credit for sales that would have happened anyway.
The Attribution Problem
Attribution—whether last-click, multi-touch, or data-driven—measures correlation, not causation. A user who clicks a branded search ad and then purchases gets attributed to that ad. But would they have purchased anyway by typing your URL directly? For many branded search campaigns, the answer is overwhelmingly yes.
This is the incrementality problem: some portion of every channel's "attributed" conversions would have happened without the marketing spend. The percentage varies wildly by channel:
- Branded search: Often 60–90% of attributed conversions would happen organically. You're paying for clicks from people who already intended to visit your site.
- Retargeting: Typically 50–70% would have converted without the retargeting ad. You're showing ads to people who were already in a buying mindset.
- Prospecting display: Usually 5–20% incremental lift. Most impressions have little measurable impact, but the ones that work are genuinely incremental.
- Paid social (prospecting): Typically 15–40% incremental, varying enormously by audience and creative quality.
These numbers aren't universal—they depend on your brand, category, and customer base. The only way to know your specific incrementality rates is to test.
How Incrementality Testing Works
The core concept is borrowed from clinical trials: create a test group that sees your marketing and a control group that doesn't, then compare outcomes.
The Basic Framework
- Define your audience. The total population you want to test (e.g., all US adults who match your targeting criteria).
- Randomly split into test and control. The test group sees your ads. The control group is held out—they don't see your ads.
- Run the campaign. Deliver ads only to the test group for a defined period (typically 2–4 weeks).
- Measure the difference. Compare conversion rates between the test and control groups.
- Calculate the lift. Incremental lift = (Test conversion rate - Control conversion rate) / Control conversion rate.
If the test group converts at 2.0% and the control group converts at 1.5%, your incremental lift is 33% relative to the control baseline. Be careful with the interpretation: of the test group's conversions, 25% (0.5 of 2.0 percentage points) were truly incremental; the other 75% would have happened without the ads.
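The framework above reduces to a few lines of arithmetic. Here is a sketch (the function name and input shape are ours, not from any platform's API):

```javascript
// Sketch of the lift math described above; all names are illustrative.
function incrementalityResults(test, control) {
  const testRate = test.conversions / test.users;
  const controlRate = control.conversions / control.users;
  // Relative lift over the control baseline
  const lift = (testRate - controlRate) / controlRate;
  // Share of the test group's conversions that are truly incremental
  const incrementalShare = (testRate - controlRate) / testRate;
  return { testRate, controlRate, lift, incrementalShare };
}

const results = incrementalityResults(
  { users: 100000, conversions: 2000 },  // 2.0% conversion rate
  { users: 100000, conversions: 1500 }   // 1.5% conversion rate
);
// lift ≈ 0.33 (33% relative lift); incrementalShare = 0.25
```

Note that the two output numbers answer different questions: `lift` is growth over the baseline, while `incrementalShare` tells you what fraction of the conversions you're paying for actually needed the ads.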
Incrementality Testing Methods
Method 1: Platform Conversion Lift Tests
Meta, Google, and TikTok all offer built-in conversion lift testing. The platform handles the randomization—it selects users to hold out and measures the conversion difference.
Pros: Easy to set up, handles randomization and measurement automatically, and integrates directly with your campaign.
Cons: The platform is measuring itself. There's an inherent conflict of interest, even if the methodology is sound. You're also limited to the platform's measurement capabilities and can't customize the test design.
Method 2: Geo-Based Lift Tests
Instead of splitting users, split geographic regions. Run your campaign in some markets (test) and pause it in matched markets (control).
Pros: No user-level tracking required, which makes the method robust to privacy restrictions. Works for any channel, including offline (TV, radio, OOH). Measures total business impact, not just digital conversions.
Cons: Requires enough geographic markets to achieve statistical power. Market-level differences (economic conditions, competitive activity) can confound results. Takes longer to reach significance because you're working with fewer data points.
// Example geo-lift test design
const geoLiftTest = {
  testMarkets: ['Chicago', 'Houston', 'Phoenix', 'Philadelphia'],
  controlMarkets: ['Dallas', 'San Antonio', 'San Diego', 'San Jose'],
  matchingCriteria: {
    populationSimilarity: 0.9,
    historicalConversionCorrelation: 0.85,
    baselineRevenueSimilarity: 0.88
  },
  duration: '4 weeks',
  cooldownPeriod: '2 weeks',
  primaryMetric: 'total_revenue',
  secondaryMetrics: ['new_customers', 'website_visits', 'branded_search_volume']
};
Method 3: Ghost Ads / Intent-to-Treat
In a ghost ad test, the control group is exposed to the opportunity for an ad impression, but a placeholder (or competitor's ad) is shown instead. This ensures the test and control groups have identical ad exposure opportunity, isolating the impact of your specific creative.
Pros: Eliminates selection bias from platform targeting algorithms. Measures the impact of your creative specifically, not just the impact of being targeted.
Cons: Technically complex to implement. Requires cooperation from the ad platform or a DSP that supports ghost ad functionality. Limited availability.
Method 4: Pre/Post with Matched Markets
A simpler approach: pause a campaign entirely and compare performance before and after. Use matched markets or time periods as controls.
Pros: Simple to execute. No special tools or platform features required. Anyone can turn off a campaign and measure what happens.
Cons: Many confounding factors (seasonality, competitive activity, organic trends) make it hard to isolate the campaign's impact. Lower confidence than randomized experiments.
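A common way to partially control for those confounders is difference-in-differences: use the matched market's pre-to-post change as an estimate of the shared baseline trend, and subtract it out. A sketch, with made-up numbers:

```javascript
// Difference-in-differences: the control market's pre-to-post change
// approximates what the test market would have done without the change.
function diffInDiff(test, control) {
  const testChange = test.post - test.pre;
  const controlChange = control.post - control.pre;
  // Whatever shift the control saw is treated as the shared baseline trend.
  return testChange - controlChange;
}

// Example: pausing a campaign in the test market.
const impact = diffInDiff(
  { pre: 500000, post: 470000 },  // test market revenue fell by 30k
  { pre: 480000, post: 475000 }   // control market fell by 5k on its own
);
// impact = -25000: revenue change attributable to pausing the campaign
```

This only nets out trends that both markets share; a local shock in either market (a competitor's promotion, weather) still biases the estimate, which is why randomized designs remain the higher-confidence option.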
Designing a Good Test
Sample Size and Duration
The most common mistake in incrementality testing is running underpowered tests. If your sample is too small or your test period too short, random noise will overwhelm the signal.
Rules of thumb:
- User-level tests: You typically need tens of thousands of conversions across test and control to detect a meaningful lift with statistical confidence.
- Geo-level tests: You need at least 10 test markets and 10 control markets for reasonable statistical power.
- Duration: Run tests for at least 2 weeks. 4 weeks is better for capturing weekly patterns. Include a 1–2 week cooldown period after the test to measure lagged effects.
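The user-level rule of thumb can be made concrete with a standard two-proportion power calculation (normal approximation, two-sided test). The defaults below assume α = 0.05 and 80% power; the function name is ours:

```javascript
// Approximate users needed PER ARM to detect a given relative lift
// over a baseline conversion rate (normal approximation).
// zAlpha = 1.96 (two-sided, α = 0.05); zBeta = 0.84 (80% power).
function usersPerArm(baselineRate, relativeLift, zAlpha = 1.96, zBeta = 0.84) {
  const p1 = baselineRate;                       // control conversion rate
  const p2 = baselineRate * (1 + relativeLift);  // expected test conversion rate
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  const n = ((zAlpha + zBeta) ** 2 * variance) / (p1 - p2) ** 2;
  return Math.ceil(n);
}

// Detecting a 10% relative lift on a 1.5% baseline takes
// roughly 100,000+ users per arm.
usersPerArm(0.015, 0.10);
```

The sample requirement grows with the square of the inverse effect size: halving the lift you want to detect roughly quadruples the audience you need, which is why low-baseline, low-lift tests are so expensive to power.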
What to Measure
Your primary metric should be conversions or revenue—the outcome your campaign is designed to influence.
But also track secondary metrics that help explain the results: website visits, branded search volume, new vs. returning customer conversions, and average order value. These help you understand how the campaign is driving (or not driving) results.
Testing Cadence
Incrementality results change over time. A campaign that's 30% incremental today might be 15% incremental after a year as your audience becomes saturated. Test regularly:
- Quarterly: Test your highest-spend channels and campaigns.
- Semi-annually: Test medium-spend channels and any new channels you've added.
- After major changes: Retest whenever you significantly change creative, targeting, or budget levels.
Turning Results into Action
Adjusting Attribution Credit
Use incrementality results to calibrate your attribution model. If your branded search campaign shows 20% incrementality, apply a 0.20 multiplier to its attributed revenue. This gives you an "incrementality-adjusted" view of channel performance that's closer to the truth.
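In practice the adjustment is a per-channel multiplier. A sketch, where the channel names and rates are illustrative stand-ins for your own test results:

```javascript
// Multiply each channel's attributed revenue by its tested incrementality rate.
const incrementalityRates = {
  branded_search: 0.20,  // from your own lift test, not an industry constant
  retargeting: 0.35,
  paid_social: 0.70,
};

function adjustAttribution(attributedRevenue) {
  const adjusted = {};
  for (const [channel, revenue] of Object.entries(attributedRevenue)) {
    // Default to 1.0 (full credit) for channels you haven't tested yet.
    adjusted[channel] = revenue * (incrementalityRates[channel] ?? 1.0);
  }
  return adjusted;
}

adjustAttribution({ branded_search: 100000, retargeting: 50000, email: 20000 });
// → { branded_search: 20000, retargeting: 17500, email: 20000 }
```

Defaulting untested channels to full credit is a deliberate choice: it keeps the adjusted view conservative about claims of waste until you have test evidence either way.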
Budget Reallocation
Channels with high incrementality deserve more budget. Channels with low incrementality deserve scrutiny—not necessarily elimination (they may still serve a protective or branding function), but they shouldn't get credit for driving growth.
Identifying Waste
The most valuable output of incrementality testing is finding spend that generates zero incremental value. Common culprits: branded search on your own brand name, retargeting users who are already in a purchase flow, and frequency-capped impressions beyond the point of diminishing returns.
How Audiencelab Supports Incrementality
Audiencelab provides the data infrastructure that makes incrementality testing practical:
- Clean audience segmentation for building randomized test and control groups from your first-party data.
- Cross-channel conversion measurement that captures outcomes regardless of which channel drove them.
- Incrementality-adjusted attribution that applies test results to your ongoing attribution reporting.
- Holdout audience management for maintaining persistent control groups across campaigns.
Ready to find out which campaigns are actually working? Talk to our team about setting up incrementality testing.