Evaluating In-Store Media: Which Approach Is Right For You?
Henry Sherratt, March 22
A quick internet search reveals any number of quotes about the value of learning from the past to improve the future, from Confucius to the late and great Steve Jobs. Whilst they probably didn’t have commerce media on their minds at the time, it’s fair to say that evaluating your media’s performance is no different: post-campaign evaluation is the best way to improve future results, offering insight into what did and didn’t work.
Data is our bread and butter at Lobster, and so we have a few different ways to gauge the effectiveness of media! Today we’ll be running through a few in-store media evaluation approaches (as well as comparing some of their more significant pros and cons) to give some insight on which one may be right for you.
Test vs. Control
Test vs. Control compares a subset of stores carrying the media against similar stores without it, matching the two groups on several key variables. These could include store size; region; product, brand, and category sales; store format; and more.
The key benefit of the Test vs. Control methodology is precision: it isolates the sales impact of the media alone, controlling for factors that could otherwise influence sales, such as promotions and regional differences. This makes for a much more robust analysis, giving a brand a clearer picture of what did or did not work.
Nonetheless, Test vs. Control is arguably the most complex methodology on this list: it requires sufficiently detailed sales data, a suitable comparison group to act as the control, and the technical know-how to implement statistically sound matching. Even we don’t use Test vs. Control when the required information isn’t available. A “true” innovation is one example of this, where a supplier enters a new category with no affiliated parent brand to draw sales data from.
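To make the matching idea concrete, here is a minimal, hypothetical sketch in Python. The store attributes, weights, and sales figures are all invented, and real matching would use far more variables and proper statistical techniques; this just shows the shape of the approach: pair each media store with its most similar non-media store, then measure uplift against the matched pair.

```python
# Hypothetical sketch of Test vs. Control matching: for each media ("test")
# store, find the most similar non-media store on size and baseline sales,
# then measure uplift against that matched control. All data is invented.

def similarity_distance(a, b):
    """Simple normalised distance across two matching variables."""
    return abs(a["size_sqft"] - b["size_sqft"]) / 10_000 \
         + abs(a["baseline_sales"] - b["baseline_sales"]) / 1_000

test_stores = [
    {"id": "T1", "size_sqft": 12_000, "baseline_sales": 5_000, "live_sales": 6_100},
    {"id": "T2", "size_sqft": 8_000,  "baseline_sales": 3_200, "live_sales": 3_900},
]
control_pool = [
    {"id": "C1", "size_sqft": 11_500, "baseline_sales": 4_900, "live_sales": 5_100},
    {"id": "C2", "size_sqft": 7_800,  "baseline_sales": 3_100, "live_sales": 3_200},
    {"id": "C3", "size_sqft": 20_000, "baseline_sales": 9_000, "live_sales": 9_300},
]

uplifts = []
for test in test_stores:
    control = min(control_pool, key=lambda c: similarity_distance(test, c))
    # Uplift = test store growth minus matched control store growth
    test_growth = test["live_sales"] - test["baseline_sales"]
    control_growth = control["live_sales"] - control["baseline_sales"]
    uplifts.append(test_growth - control_growth)
    print(f"{test['id']} matched to {control['id']}: uplift {test_growth - control_growth:+}")

avg_uplift = sum(uplifts) / len(uplifts)
print(f"Average incremental uplift per store: {avg_uplift:+.0f}")
```

The subtraction of the control store’s growth is what makes this robust: any change that affects both stores (seasonality, a price change) cancels out, leaving the media’s contribution.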
Comparing Live Media Stores to Non-Media Stores
This evaluation method is like test vs. control but compares the average sales across all stores with the campaign media against all stores without. This gives an impression of how much media stores outperformed non-media stores and is often used to evaluate media for new products without prior data to generate test vs. control pairs.
Live Media vs. Non-Media Stores is a useful methodology if you simply want to know whether the media was associated with stronger sales performance than in stores without media, or if you’re evaluating products with no previous data, such as NPDs. Compared with test vs. control, it has the additional benefit of generating results without requiring the same depth of analysis. The key disadvantage is that the results don’t factor in differences between stores, such as size, reducing their reliability.
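With invented figures, the comparison boils down to a couple of averages. Note what happens in this particular (hypothetical) example: a single large non-media store drags the non-media average up, which is exactly the kind of distortion an unmatched comparison cannot correct for.

```python
# Hypothetical sketch: compare average campaign-period sales across all
# media stores against all non-media stores. The method ignores store
# differences such as size, which is its main weakness. Figures are invented.

media_store_sales = [6_100, 3_900, 5_400]
non_media_store_sales = [5_100, 3_200, 9_300, 4_000]

avg_media = sum(media_store_sales) / len(media_store_sales)
avg_non_media = sum(non_media_store_sales) / len(non_media_store_sales)
pct_difference = (avg_media - avg_non_media) / avg_non_media * 100

print(f"Media stores averaged {avg_media:.0f} vs {avg_non_media:.0f}, "
      f"a difference of {pct_difference:+.1f}%")
```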
Pre-campaign vs. Live Period
This methodology compares performance during/shortly after activation of in-store media – the live period – versus performance in the period leading up to activation – the pre-period. This is typically used where control stores are unavailable, which may be the case if media is activated in every store of a particular supermarket chain.
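A minimal sketch of the pre vs. live calculation, using invented weekly sales figures; because there is no control group, the uplift shown cannot be attributed to the media alone:

```python
# Hypothetical sketch: pre-campaign vs. live-period comparison for a single
# product across all stores. No control group, so any uplift measured may
# also reflect seasonality, pricing, etc. Weekly figures are invented.

pre_period_weekly_sales = [4_800, 5_000, 4_900, 5_100]   # 4 weeks before launch
live_period_weekly_sales = [5_600, 5_900, 5_700, 5_800]  # 4 weeks of live media

pre_avg = sum(pre_period_weekly_sales) / len(pre_period_weekly_sales)
live_avg = sum(live_period_weekly_sales) / len(live_period_weekly_sales)
uplift_pct = (live_avg - pre_avg) / pre_avg * 100

print(f"Average weekly sales rose from {pre_avg:.0f} to {live_avg:.0f} "
      f"({uplift_pct:+.1f}%) during the live period")
```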
Much like live media vs. non-media, results can be generated with limited data, making this a generally easier analysis method for a brand. If no other factors are in play, such as time of year or price changes, then pre vs. live can be an effective way of judging the effects of media. However, results are not directly comparable between analyses, and changes in sales may not be attributable to the media alone. Put simply, pre vs. live doesn’t control for any variables that may have impacted sales. It also won’t work well for new products, since sales would naturally increase for reasons unrelated to the media. You might therefore want to opt for a methodology that accounts for seasonality, such as…
Comparing This Year to Last Year
If a campaign tends to happen at roughly the same time period each year, we can compare it to a similar campaign from the year before to determine the incremental impact of the media. A year-on-year comparison in this style might look at two summer or Christmas campaigns, for example.
The obvious benefit of this methodology is its simplicity: little analysis is required, as you’ll simply need to compare numbers such as % sales difference and average store sales uplift. Its crudeness is also its key drawback, however. Plenty of external factors might skew results and give an incomplete impression of media performance, like a worldwide pandemic…
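The arithmetic really is this simple; a hypothetical sketch with invented campaign totals and store counts, computing the % sales difference and average per-store change mentioned above:

```python
# Hypothetical sketch: year-on-year comparison of two seasonal campaigns.
# Simple by design, but nothing here controls for external factors
# (pricing, distribution, or indeed a pandemic). Figures are invented.

last_year_campaign_sales = 120_000
this_year_campaign_sales = 134_000
stores_last_year, stores_this_year = 50, 52

pct_sales_difference = (this_year_campaign_sales - last_year_campaign_sales) \
    / last_year_campaign_sales * 100
avg_store_change = (this_year_campaign_sales / stores_this_year
                    - last_year_campaign_sales / stores_last_year)

print(f"Sales difference: {pct_sales_difference:+.1f}%")
print(f"Average per-store change: {avg_store_change:+.0f}")
```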
There are clearly a few ways that a brand can evaluate media, with the one that’s best for your brand being entirely dependent on the quality of data (and time) available to you. But which do we think is best here at Lobster?
At the end of the day, each method has its merits, but we consider test vs. control to be the gold standard. If you have the necessary data resources, test vs. control will give you an unsurpassed understanding of your media. This could produce evidence that you’re smashing your KPIs, or reveal that you’re not doing as well as you thought you were. Either way, gathering and collating data like this is what allows a brand to maximise the effectiveness of ad spend. Building up a bank of robust evaluations and learning what does or doesn’t work for your brand is a reliable way to see improvements to your KPIs.
Need a little help with that? Lobster was founded with the ambition to give our clients the technology and insight to create great commerce media campaigns. Our in-house software can help with everything from planning your next big campaign to evaluating individual media. Get in touch today to find out how we can help you create better commerce media campaigns.