Trade promotion spending is one of the largest expenses on a P&L and can reach 25% of sales for consumer packaged goods (CPG) companies. The most widely used way to analyze trade spending in the industry is the set of trade promotion measures provided by syndicated data providers. Yet trade promotion spending continues to become less effective at driving incremental sales, even as spending levels continue to increase. Fundamentally, something is wrong with this picture.
We think it’s time to consider that it’s the measurements themselves that are contributing to the problem.
The Problems with the Measures
As we see it, here are the six big problems with the way trade promotions are analyzed:
Focus on Quantity, Not Quality – The vast majority of syndicated measures focus only on the quantity of trade support (e.g. % ACV with Feature/Display), not the quality of support (i.e. how much incremental sales volume is being generated). Quality measures are as important as quantity measures – arguably more so. A better way to capture promotional quality is to measure incremental lift per event, which shows how much extra sales volume you get each time you promote.
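The idea behind incremental lift per event can be sketched as follows. This is an illustrative calculation only – the event data, field names and figures are hypothetical, and the actual TABS methodology may define baseline and lift differently:

```python
# Hypothetical sketch: incremental lift per promotional event.
# "baseline" = expected sales without promotion; "actual" = sales
# during the promoted period. All figures are illustrative.

def lift_per_event(events):
    """Average incremental lift across events: (actual - baseline) / baseline."""
    lifts = [(e["actual"] - e["baseline"]) / e["baseline"] for e in events]
    return sum(lifts) / len(lifts)

events = [
    {"baseline": 1000, "actual": 1800},  # this event lifted sales 80%
    {"baseline": 1200, "actual": 1500},  # this event lifted sales 25%
]

print(f"Average incremental lift per event: {lift_per_event(events):.1%}")
```

A brand averaging high lift per event gets real value from each promotion; a brand with heavy support but low lift per event is spending without moving volume.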
The Quantity ACV Metrics are Incomplete - Quantity measures report ACV at the brand or brand/segment level but don't tell you what percent of your portfolio was being promoted. That coverage can make a big difference: promoting a single size is very different from promoting the entire line. To close these gaps, TABS Analytics created a measure called Average Events, which you can read more about here. It aggregates the promotion points for each item and divides the total by Total Distribution Points (TDPs), providing a better view of whether the entire brand is being promoted during a given period. That gives us one measure each for promotional quantity (Average Events) and quality (Incremental Lift per Event), so companies can determine whether promotional shortfalls are due to not getting enough support or to support that isn't effective.
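Based on the description above, an Average-Events-style calculation might look roughly like this. The item list and point values are invented for illustration, and the exact TABS definition may differ in detail:

```python
# Rough sketch of an Average-Events-style measure (illustrative only;
# the actual TABS Analytics definition may differ).
# promo_points: promotion distribution points observed for the item
# tdp: the item's contribution to Total Distribution Points (TDPs)

items = [
    {"name": "Brand 12oz", "promo_points": 60, "tdp": 90},
    {"name": "Brand 24oz", "promo_points": 10, "tdp": 80},
    {"name": "Brand 36oz", "promo_points": 0,  "tdp": 50},
]

# Aggregate promotion points across all items, then divide by total TDPs.
average_events = sum(i["promo_points"] for i in items) / sum(i["tdp"] for i in items)
print(f"Average Events: {average_events:.2f}")
```

Because the denominator covers the whole line, a brand that promotes only its lead size scores much lower than one promoting every item – which is exactly the portfolio-coverage gap a single brand-level ACV number hides.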
Accuracy of Display Measures – A long-running dispute between sales and marketing departments stems from the fact that sales teams are generally held accountable for display execution. Conceptually that is fine, except that in many (probably most) cases the sales team feels the syndicated figures for display support are understated. Our research shows that the small sample sizes generally used to calculate ACV display levels are not accurate enough to distinguish support levels in the low 40s from those in the high 60s. This is particularly true in display-reliant categories like cough/cold and sun care. A better way to measure display support is to rescale the data into buckets – low, medium and high, or just low and high – to get a truer sense of what display support looks like. In our protocol, we flag ACV display levels above 30% as high and below 30% as low, though the threshold varies by category.
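The two-bucket rescaling described above is simple to express. This is a minimal sketch using the 30% threshold mentioned in the text as a default; the function name and sample readings are illustrative:

```python
def display_bucket(acv_display, threshold=30.0):
    """Rescale a noisy ACV-display reading into 'high'/'low' buckets.

    30% is the protocol default from the text; the right threshold
    varies by category (illustrative sketch).
    """
    return "high" if acv_display > threshold else "low"

# Sample readings: rather than treating 45 vs 67 as meaningfully
# different, both simply register as "high" display support.
for acv in (12.0, 28.0, 45.0, 67.0):
    print(f"{acv:5.1f}% ACV display -> {display_bucket(acv)}")
```

Collapsing the scale this way trades false precision for a classification the underlying sample sizes can actually support.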
Distortions from Digital Coupons – Increasingly, you'll see percent-on-deal levels above 90% because digital ads and coupons are often featured 52 weeks per year for a given brand. When there is no cost of distribution, why wouldn't retailers run these offers all the time? This, of course, inflates the volume that appears as promoted. Worse, we've seen evidence that digital coupons don't move many units for the dollars spent. That's the double whammy: sales-on-deal measures are inflated by a promotional vehicle that doesn't move very many units.
Too Many Promotional Measures - It's common for larger custom databases to carry over 200 different syndicated promotional measures – far too many. Buried among those hundreds are a few measures that refer to quality features and displays, but they are often inaccurate, poorly defined and little known. And although they get at qualitative execution, they do not measure the effect of promotions the way the incremental lift per event measure described above does. In practice, four measures – maybe five – will give you 90%+ of what you need to evaluate your promotions: Percent ACV with TPR Only, Percent ACV with Feature Only, Percent ACV with Display Only and Percent ACV with Feature and Display. If digital ads haven't distorted your Percent of Units on Deal, you can add that one as well to gauge your brand's relative reliance on promotions.
Rewards Programs Not Captured in the Analysis - These metrics do not pick up rewards, offers or overlays, yet much of the loyalty marketing executed in retail outlets affects the net price. For example, an item may be featured at its regular price, but applying a $3 FSI brings the price down 25% – effectively a TPR that never gets picked up. Most of these offers don't work especially well (the exception being FSI call-outs, which sometimes move the needle), but you still need the data and the ability to run the analysis yourself. As of now, there are no syndicated data measures for this.
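The arithmetic behind the FSI example above is worth making explicit. A $3 coupon producing a 25% discount implies a $12 regular price – an assumption filled in here for illustration, since the text gives only the coupon value and the discount percentage:

```python
# Illustrative arithmetic for the hidden-TPR example in the text.
# A $3 FSI that cuts the price 25% implies a $12 regular price
# (assumed here; the text does not state the price explicitly).
regular_price = 12.00
coupon_value = 3.00

net_price = regular_price - coupon_value
effective_discount = coupon_value / regular_price

print(f"Net price: ${net_price:.2f}")
print(f"Effective TPR depth: {effective_discount:.0%}")
```

Syndicated feeds would record this item at its $12 shelf price, so the 25% effective discount – and whatever volume it drove – is invisible in the standard measures.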
How Do We Solve This Problem?
We developed the TABS PromoInsight™ tool specifically to address the issues outlined above.
As part of it, we created our own measure, Average Events, which captures both ACV coverage and the breadth of items being promoted within the line. To account for promotion costs, we can work with syndicated data and overlay tracking services like FeatureVision® to give clients the ability to measure their events more accurately.
For a deep dive on all things trade promotion, start with this post we wrote in 2014 and work your way through the series to learn a better way to plan, execute and analyze your trade promotion spending.