- Circana
For years, marketers have collected data and run attribution models to gauge the effectiveness of their programs across channels. Although these traditional models help validate initiatives to company leaders and shape future campaigns, lingering doubts remain. For example, it may not be clear if ads reached loyal customers who were going to buy that product anyway or if the campaign really moved the needle on future opportunities.
For market insights managers, brand managers, and measurement leaders at CPG and retail brands, attribution-based measurement alone limits the ability to understand true performance. Metrics like total sales figures may look great on paper, but they do not always reveal the actual reasons behind a purchase, which is crucial for repeat purchases and long-term loyalty.
To make informed decisions about marketing and advertising investments, a more comprehensive approach is needed to pinpoint exactly where the next investment should be allocated. This method involves moving beyond traditional attribution to focus on incrementality attribution.

Where Attribution Leaves Off and Incrementality Picks Up
What is the key difference between the two models? Attribution tells you which touchpoints were present when a conversion happened. Incremental attribution tells you which of those touchpoints actually caused it.
Traditional attribution models that rely on last-touch logic or spread credit across multiple touchpoints capture the total sales associated with those touchpoints but don't prove what led to those sales. Moreover, data privacy laws have disrupted this traditional model, making it increasingly difficult to connect specific creative assets directly to consumers.
When incremental lift is layered in, marketers can better gauge the impact of a campaign.
Incremental attribution isolates the specific effect of an ad on consumer behavior, comparing a test group that saw the ad against a control group that did not.
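The test-versus-control comparison comes down to simple arithmetic. As a minimal sketch with hypothetical figures (the group sizes and buyer counts below are invented for illustration):

```python
# Hypothetical campaign results: exposed (test) vs. held-out (control).
test_buyers, test_size = 1_240, 20_000        # saw the ad
control_buyers, control_size = 1_000, 20_000  # did not see the ad

test_rate = test_buyers / test_size           # conversion rate, exposed
control_rate = control_buyers / control_size  # conversion rate, baseline

absolute_lift = test_rate - control_rate      # extra conversion, in points
relative_lift = absolute_lift / control_rate  # lift as a share of baseline

# Conversions the ad actually caused, not merely touched:
incremental_conversions = absolute_lift * test_size
```

Attribution would credit all 1,240 test-group purchases to the campaign; incrementality credits only the roughly 240 that would not have happened without it.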
This model offers key benefits:
Boosting budget efficiency: CPGs and retailers can isolate incremental sales to ensure that every dollar drives growth. Incrementality attribution supports scalable experiments, reduces launch risks, and optimizes resource allocation.
Proving real value to leadership: Today’s senior leaders and finance teams expect clear, defensible evidence of marketing ROI. Presenting incremental sales data provides a transparent metric, enabling organizations to attribute revenue specifically to marketing efforts.
Leveling the playing field: While large advertisers have the resources to demand incremental measurement, this approach is highly valuable for mid-sized and smaller brands, too. Incrementality testing democratizes measurement, allowing brands of all sizes to validate their decisions and prove their worth.

How Circana Measures Incrementality in Physical Retail
A/B testing delivers incremental attribution, but other tools can strengthen that form of testing. Controlled holdout methodologies and synthetic matched-market construction are now available through Circana’s Liquid Testing™ solution to reveal the net-new outcomes that occurred solely because of the marketing intervention, without requiring dark markets or held-back sales. This enables brands to aim their budgets at buyers who contribute to genuine growth.
In the physical retail environment, a consumer might walk into a store and see a promotional video on a grocery TV network, hear an in-store radio ad, or walk past a digital endcap. These broadcasts go out to everyone in the store, and a brand can’t specifically target or identify the individual who saw the ad.
Although non-addressable in-store media can’t be connected to a specific consumer device, marketers can use Liquid Testing to evaluate the broad causal effects of a campaign without collecting personal consumer data. This privacy-safe liquid testing methodology analyzes the difference in sales velocity across test-and-control environments, giving businesses the opportunity to accurately measure the booming in-store retail media space while remaining fully compliant with modern privacy standards.

The iROAS Standard and Why It Holds Up in the Boardroom
As more advertisers lean into causal measurement, Incremental Return on Ad Spend (iROAS) has emerged as the gold standard. iROAS represents the precise revenue generated for every incremental dollar invested and provides a definitive metric valued by leadership.
Adopting iROAS does not require abandoning traditional ROAS entirely; the two metrics are complementary tools. Traditional ROAS is useful as a baseline directional indicator, especially when incremental data is temporarily unavailable or when assessing highly addressable, lower-funnel tactics. Some data is better than no data at all.
That said, ROAS is ultimately less effective than iROAS for strategic budget decisions. When practitioners run Liquid Testing studies, they often see a clear gap between attributed ROAS and iROAS. Attributed ROAS might suggest a campaign is wildly successful, while the iROAS reveals a more modest, yet highly accurate, incremental lift. A CFO presented with both numbers has a more complete and justifiable story than one presented with attributed ROAS alone.
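The gap between the two metrics is easy to see with hypothetical figures (the spend and revenue numbers below are invented for illustration, not Circana benchmarks):

```python
# Hypothetical campaign economics (invented figures, not benchmarks).
spend = 100_000.0
attributed_revenue = 550_000.0   # every sale a touchpoint was "present" for
incremental_revenue = 180_000.0  # sales above the control-group baseline

roas = attributed_revenue / spend    # credited return on ad spend
iroas = incremental_revenue / spend  # causal return per incremental dollar
```

Here an attributed ROAS of 5.5 suggests a runaway success, while an iROAS of 1.8 is the more modest, defensible number a finance team can act on.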

Three Things Marketers Get Wrong About Incrementality Attribution
As incrementality attribution gains ground, a few myths might hold marketers back. Here are three of the most common misunderstandings and why they no longer fit the way modern testing works.
“You have to hold back sales to run an incrementality test”
This is one of the biggest misconceptions, especially among teams that associate testing with sacrifice. The old concern is straightforward: if you want to measure impact cleanly, you must withhold a tactic from part of the market, accept lost revenue in the short term, and treat the whole exercise as a tradeoff between learning and selling.
Rather than requiring a crude, one-to-one holdout that may distort performance or create unnecessary business risk, the synthetic controls built into Liquid Testing create a more accurate comparison group from real-world store data. In practice, marketers can estimate what would have happened without the change, then compare it to what actually happened, without disrupting the business in a heavy-handed way.
This matters because most retail conditions are messy. Stores vary by traffic, assortment, pricing history, competitive intensity, weather, and local demand. If you simply compare one exposed group with one unexposed group, you may mistake external noise for true lift.
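One common way to build such a comparison group, sketched here as a generic synthetic-control approach rather than Circana's proprietary method, is to weight unexposed "donor" stores so their mix tracks the test store before the change, then use that mix as the counterfactual afterward. All store names and sales figures below are invented:

```python
import numpy as np

# Hypothetical weekly unit sales (pre-period, 4 weeks) for a test store
# and three candidate control stores (the "donor pool").
test_pre = np.array([120.0, 130.0, 125.0, 135.0])
donors_pre = np.array([
    [100.0, 110.0, 105.0, 115.0],   # store A
    [140.0, 150.0, 145.0, 155.0],   # store B
    [ 90.0,  95.0,  92.0,  98.0],   # store C
]).T  # shape: weeks x stores

# Least-squares weights so the donor mix tracks the test store pre-period.
weights, *_ = np.linalg.lstsq(donors_pre, test_pre, rcond=None)

# Post-period: compare the test store's actual sales with the
# synthetic counterfactual built from the same weights.
donors_post = np.array([
    [112.0, 152.0, 96.0],   # week 1: stores A, B, C
    [118.0, 158.0, 99.0],   # week 2: stores A, B, C
])
counterfactual = donors_post @ weights
actual_post = np.array([140.0, 147.0])  # test store after the in-store change

lift = actual_post - counterfactual     # estimated incremental units per week
```

Because the weighted donor mix reproduces the test store's pre-period exactly in this toy example, the post-period gap (about 8 to 9 units per week) is the estimated lift, and no store had to be held out of the campaign.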
“It only works for digital channels”
This misconception comes from the history of attribution itself. For years, the most mature measurement tools centered on digital media because digital environments made exposure easier to track. If an ad could be tied to a device ID, cookie, or logged-in user, marketers had a path to measurement. That made digital look measurable and physical retail seem murky.
But that gap is exactly why incrementality testing has become so important. Liquid Testing is built specifically for physical retail, designed to help brands and retailers understand the sales impact of in-store changes, using store-level POS data, advanced experimentation methods, and AI-powered insights. That includes testing the effect of pricing, promotions, placement, merchandising, new products, and shelf changes before a broader rollout. It is a tool developed for the realities of brick-and-mortar commerce, where many important variables are visible at the store level rather than the user level.
“It’s a one-time study, not an ongoing tool”
Another common mistake is treating incrementality testing like a special project: a team runs one study before a launch, presents the findings, and moves on. That approach may have made sense when testing was slow, expensive, and custom-built each time. But it does not match how modern retail decisions get made.
As a self-serve solution, Liquid Testing allows brands and retailers to run unlimited tests with setup and analysis done in minutes. Instead of saving experimentation for a few high-stakes questions, teams can build continuous measurement into normal operations.
That matters because retail conditions are never static. A pricing change that works in one season may underperform in another. A shelf tactic that lifts one category may not translate to a different store format. A promotion may succeed in one region and fail in another. The smarter model is ongoing optimization, where each test informs the next decision.

What a Complete Measurement Stack Looks Like
A maturity framework for media measurement maps this evolution in stages. Each stage builds on the previous one to deliver increasingly precise and actionable insights.
Stage 1: Attribution models as the primary source of truth
At this foundational stage, brands rely heavily on attribution models to guide budget decisions. These models allocate credit for conversions across various touchpoints, offering a basic understanding of campaign performance.
Stage 2: Episodic incrementality studies
In this intermediate stage, brands supplement attribution with incrementality studies that provide a causal check on attribution findings by isolating the specific lift generated by marketing efforts. Circana's A/B testing methodologies and store-level POS data allow marketers to run targeted, privacy-safe experiments that measure the direct impact of in-store changes.
Stage 3: Always-on incrementality infrastructure
The pinnacle of measurement maturity is achieved with an always-on incrementality infrastructure that operates continuously alongside attribution models. This integrated approach ensures that every planning cycle benefits from both credited performance metrics and confirmed lift. Circana's Liquid Testing platform is designed to support this level of sophistication, offering a self-serve, AI-powered environment for running unlimited tests with unmatched speed and precision.