
Incrementality: The correct way to measure marketing

One of the business side effects of the pandemic is that it has shined a very bright light on marketing budgets. This is a good thing in any circumstance; your marketing should be earning its keep. And this scrutiny is particularly beneficial in times when companies might not be doing well financially.

A focus on budgets means a focus on conversion. From there, it is a quick leap into an in-depth discussion of attribution and incrementality. And while those two often get lumped together, they’re not at all the same thing.

Attribution is simply the science (or, too often, the art) of distributing credit for conversions across your various marketing channels. When we do attribution modeling in tools like Adobe or Google Analytics, we are essentially saying that if there were four interactions with our owned, earned, and paid media channels before a last-click conversion from paid search, then the credit for that conversion should be distributed to all the interactions.
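To make the mechanics concrete, here is a minimal sketch in Python of how one simple rules-based model, linear attribution, splits credit for a single conversion equally across every touchpoint in the path. The channel names and the journey are made up for illustration; tools like Adobe and Google Analytics apply their own model variants, not this exact code.

```python
from collections import defaultdict

def linear_attribution(paths):
    """Split each conversion's credit equally across every touchpoint
    in the path that preceded it (a simple rules-based model)."""
    credit = defaultdict(float)
    for path in paths:
        share = 1.0 / len(path)          # one conversion, split evenly
        for channel in path:
            credit[channel] += share
    return dict(credit)

# Hypothetical journey: four interactions before a last-click paid-search conversion.
paths = [
    ["organic_search", "display", "email", "paid_search"],
]

print(linear_attribution(paths))
# {'organic_search': 0.25, 'display': 0.25, 'email': 0.25, 'paid_search': 0.25}
```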

The question incrementality asks is “How many of those conversions would have happened anyway, without any advertising spend?”

Incrementality is incredibly hard to measure; just think of all the data you need to collect to figure out the answer. It is also a complicated question to pin down, in part because when senior leaders ask about incrementality, they often mean different things.

To help you get a better handle on it, let me share the three types of marketing incrementality and how each type can be used.

Channel-silo incrementality

Let’s say you spend a lot of money on paid-search advertising. Excellent!

An incrementality-curious executive might ask you: “How many of the conversions we received from paid search would we have gotten anyway because of our organic search listings?”

This is what I call channel-silo incrementality.

You can, for example, conduct conversion lift studies for your direct-response advertising. You can measure channel-silo incrementality delivered for your brand advertising as well. (Here are detailed instructions on exactly how to do that on Google’s Display and YouTube advertising platforms.)

Another option is randomized, controlled experiments. These are user-level experiments (as opposed to geo-level tests, like matched-market tests) that use causal methodology to determine whether an ad actually changed consumer behavior. They randomly divide target users into two groups: we show ads to one and withhold ads from the other. This gives a clean comparison between users who were exposed to the ads and users who would have been exposed but were not.
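Here is a rough sketch of the arithmetic behind such a holdout test, with made-up numbers. The idea is to apply the control group's conversion rate to the exposed group and treat anything above that expectation as incremental.

```python
def conversion_lift(exposed_users, exposed_conversions,
                    control_users, control_conversions):
    """Estimate incremental conversions from a user-level holdout test.

    Conversions we'd expect without ads = the control group's conversion
    rate applied to the exposed group; anything above that is incremental.
    """
    control_rate = control_conversions / control_users
    expected_without_ads = control_rate * exposed_users
    incremental = exposed_conversions - expected_without_ads
    lift_pct = incremental / exposed_conversions
    return incremental, lift_pct

# Hypothetical test: 100k users saw ads, 100k were held out.
inc, lift = conversion_lift(100_000, 2_500, 100_000, 2_100)
print(f"Incremental conversions: {inc:.0f} ({lift:.0%} of exposed conversions)")
# Incremental conversions: 400 (16% of exposed conversions)
```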

Some would say channel-silo incrementality is not true incrementality, and they’re right. But it is still helpful for achieving a tactical understanding of how to optimize your advertising in an individual channel.


If you are spending $2 million on paid search each month and it has 16% incrementality, the first thing you do is identify the keywords driving that 16%. The second thing you do is identify other keywords where your organic search presence is weak. Then you pour budget into paid search for those keywords, because that is where it will drive incremental conversions.
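A minimal sketch of that prioritization logic, with hypothetical keyword-level data, looks like this: rank keywords by the incremental conversions paid search actually drives, and pay special attention to the ones with weak or missing organic coverage.

```python
# Hypothetical keyword-level results from a channel-silo incrementality study.
keywords = [
    # (keyword, paid conversions, incrementality, organic rank)
    ("running shoes",        1200, 0.05, 1),    # strong organic: low incrementality
    ("trail shoes sale",      400, 0.35, 12),   # weak organic: paid does real work
    ("buy waterproof boots",  250, 0.40, None), # no organic listing at all
]

def incremental_conversions(paid_conversions, incrementality):
    """Conversions that would not have happened without the paid ad."""
    return paid_conversions * incrementality

# Rank keywords by the incremental conversions paid search actually drives,
# surfacing the ones worth more budget (typically where organic is weak).
ranked = sorted(
    keywords,
    key=lambda k: incremental_conversions(k[1], k[2]),
    reverse=True,
)
for kw, conv, inc, organic_rank in ranked:
    print(f"{kw:24s} incremental={incremental_conversions(conv, inc):6.0f} "
          f"organic_rank={organic_rank}")
```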

Cross-stack incrementality

Some companies you advertise with will offer you multiple options. For example, you can advertise on Google Search, Google Display, and YouTube.

An incrementality-curious executive might ask you: “What is the incrementality across my advertising on Google-owned properties?”

I call this cross-stack incrementality.

[Figure: Example of cross-stack incrementality, showing the Google stack, the Facebook stack, and other channel silos. Source: Think with Google]

Is all that spending delivering incremental conversions? The answer is most definitely “no.”

You might have gotten some of the same conversions from YouTube as from Google Search. You might have gotten all of the same conversions from Search that you got from Display. And so on and so forth.

Due to the complexity of measuring cross-stack behavior, most ad stacks don’t offer a way to measure cross-stack incrementality.

Running clean, matched-market tests, in which you compare the behavior of users in a single control region with the behavior of users in a single test region, is a good way to measure cross-stack incrementality. Another route, if you spend a whole lot on any ad stack, is to use advanced modeling like conversion modeling.
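For illustration, here is a bare-bones way to score a matched-market test, assuming you already have weekly conversion counts for the test and control regions before and after the ads went live. A simple difference-in-differences nets out the baseline gap between the two regions; the numbers below are invented.

```python
# Hypothetical weekly conversions: pre-period (no ads anywhere) and
# test-period (ads on in the test region only).
test_pre,    test_post    = [900, 920, 910, 905], [1010, 1040, 1025, 1030]
control_pre, control_post = [880, 890, 885, 900], [895, 905, 890, 900]

def mean(xs):
    return sum(xs) / len(xs)

# Difference-in-differences: how much more did the test region grow
# than the control region did over the same window?
test_change    = mean(test_post) - mean(test_pre)
control_change = mean(control_post) - mean(control_pre)
incremental_per_week = test_change - control_change

print(f"Incremental conversions per week attributable to the ads: "
      f"{incremental_per_week:.0f}")
```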

Cross-stack incrementality helps you optimize how you allocate budget across the stack, as well as how you optimize within each of the stack's channels.

Marketing-portfolio incrementality

Measuring across all activity is the hardest part of marketing analytics.

An incrementality-curious executive might ask you: “What is the incrementality across all the marketing activity I spent money on?”

I call it marketing-portfolio incrementality.

In other words, what is the true incrementality of the money spent on Google, YouTube, Display, Facebook, cinema, print, television, channel marketing, and promotions?

How many sales did all that money really deliver? You can ask the same question for a brand metric, say unaided awareness or consideration. How much of the brand lift in metric X would not have occurred without the ad spend?

When measured correctly, the impact of incrementality on your marketing decisions can be transformative. But measuring it is really, really hard. And it can produce seemingly conflicting findings. One year, those billboards we buy in every city can be entirely useless in an incrementality context. Another year, billboards deliver so much incremental brand lift that we should shut down social-media ads. You get the idea.

Marketing-portfolio incrementality, like cross-stack incrementality, can be measured with matched-market tests.


Marketers running large campaigns across multiple channels often use marketing mix modeling (MMM). When done right, it's good for evaluating media performance and optimizing allocation across media types for long-term budgeting decisions. But I have my own issues with MMM. First, it typically understands each channel in a silo and, hence, effectively identifies channel-silo incrementality. Second, as practiced in many companies, MMM incorporates human bias into the model. These models also require an enormous amount of spend to get a decent signal and take a long time to produce.
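For context, a bare-bones version of traditional MMM is just a regression of sales on spend per channel. The sketch below uses made-up weekly data and ordinary least squares, and it skips the adstock and saturation transforms a real model would include.

```python
import numpy as np

# Hypothetical weekly data: spend per channel (in $k) and total sales.
search_spend = np.array([50, 60, 55, 70, 65, 80, 75, 90])
social_spend = np.array([20, 25, 22, 30, 28, 35, 33, 40])
tv_spend     = np.array([100, 100, 120, 120, 150, 150, 180, 180])
sales        = np.array([400, 430, 435, 470, 480, 520, 525, 560])

# Design matrix with an intercept column for baseline (non-media) sales.
X = np.column_stack([np.ones(len(sales)), search_spend, social_spend, tv_spend])

# Ordinary least squares: sales ~ baseline + sum(coef_i * spend_i).
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
baseline, b_search, b_social, b_tv = coefs
print(f"baseline={baseline:.1f}, search={b_search:.2f}, "
      f"social={b_social:.2f}, tv={b_tv:.2f}")
```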

This is the approach I prefer:

  • Use multiple machine learning–based algorithms to first understand the underlying relationships inside the data, removing human bias.
  • Construct an influence graph across the entire portfolio, removing silo analysis.
  • Understand conditional dependencies across all random variables to identify coefficients, and do so over smaller datasets.

This MMM method is scalable and smart, and it allows you to do both backward-looking analysis (how did we do?) and forward-looking prediction (how much should we spend, given diminishing-return curves?).
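As a simplified illustration of the conditional-dependency step (not the full method), you could estimate a sparse precision matrix over channel and outcome variables with scikit-learn's GraphicalLasso; nonzero off-diagonal entries become candidate edges in the influence graph. The data below is simulated purely to show the mechanics.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_weeks = 120

# Simulated weekly data: search drives some social activity, and both
# (plus TV and noise) drive conversions. Purely illustrative.
search = rng.normal(size=n_weeks)
social = 0.6 * search + rng.normal(scale=0.8, size=n_weeks)
tv     = rng.normal(size=n_weeks)
conversions = (0.7 * search + 0.3 * social + 0.2 * tv
               + rng.normal(scale=0.5, size=n_weeks))

X = np.column_stack([search, social, tv, conversions])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize before fitting

# Sparse inverse covariance: zeros imply conditional independence,
# nonzeros are candidate edges in an influence graph.
model = GraphicalLasso(alpha=0.05).fit(X)
labels = ["search", "social", "tv", "conversions"]
precision = model.precision_
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        if abs(precision[i, j]) > 1e-3:
            print(f"edge: {labels[i]} -- {labels[j]} "
                  f"(partial dependence {precision[i, j]:.2f})")
```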

Regardless of your business or budget size, you need to understand the concept of incrementality. I mean, really get it. In addition to the benefits to your marketing, it just might get you a raise and a promotion.
