
Retail Advertising Works! Measuring the Effects of Advertising on Sales via a Controlled Experiment on Yahoo!


Presentation Transcript


  1. October 2009 Retail Advertising Works! Measuring the Effects of Advertising on Sales via a Controlled Experiment on Yahoo! Randall Lewis, MIT and Yahoo! Research; David Reiley, Yahoo! Research

  2. Preview of Major Results • Our advertising campaign increases sales by about 5% for those treated with ads, producing a healthy estimated return on the cost of the advertising. • Effects of the ads appear to be persistent, perhaps for weeks after the end of the campaign. • The online ads affect not just online sales but also offline sales: 93% of the effect is offline. • The online ads have a large impact on purchases by viewers, not just clickers: 78% of the effect comes from non-clicking viewers. • Advertising especially influences older users [companion paper]: no effect on users under 40; 38% of the effect comes from consumers over 65.

  3. Advertising’s effects on sales have always been very difficult to measure. “Half the money I spend on advertising is wasted; the trouble is I don't know which half.” -John Wanamaker (Department store merchant, 1838-1922)

  4. Advertisers do not have good measures of the effects of brand-image advertising. • A Harvard Business Review article by the founder and president of comScore (Abraham, 2008) illustrates the state of the art for practitioners: • Compares those who saw an online ad with those who didn’t. • Potential problem: the two samples do not come from the same population. • Example: Who sees an ad for eTrade on Google? • Those who search for “online brokerage” and similar keywords. • Does the ad actually cause the difference in sales? • Correlation is not the same as causality.
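
To see why such exposed-versus-unexposed comparisons mislead, here is a minimal simulation, with all numbers invented for illustration: a latent "heavy browsing" trait makes a person more likely to see the ad and, independently, a lighter baseline spender, so the naive comparison is biased even though the true ad effect is held fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_ad_effect = 0.10  # assumed true lift per exposed person (hypothetical)

# A latent trait (e.g., heavy web browsing) drives BOTH ad exposure
# and baseline purchasing, creating selection bias.
trait = rng.normal(size=n)
exposed = rng.random(n) < 1.0 / (1.0 + np.exp(-trait))        # more trait -> more exposure
baseline = 2.0 - 0.5 * trait + rng.normal(scale=2.0, size=n)  # more trait -> lower baseline spend
sales = baseline + true_ad_effect * exposed

naive = sales[exposed].mean() - sales[~exposed].mean()
print(f"naive exposed-minus-unexposed estimate: {naive:+.3f}")  # negative, despite a positive true effect
print(f"true effect: {true_ad_effect:+.3f}")
```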

  5. Measuring the effects of advertising on sales has been difficult for economists as well as practitioners. • The classic technique: econometric regressions of aggregate sales versus advertising. • Be careful about Mindless Marketing Mix Modeling. • A textbook example of the “endogeneity” problem in econometrics (see Berndt, 1991). • But what causes advertising to vary over time? • Many studies are flawed in this way.
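
A complementary sketch of the time-series endogeneity problem, again with invented numbers: when budgets chase anticipated demand, an OLS regression of aggregate sales on advertising attributes the demand swings to the ads and overstates their effect.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 520  # hypothetical weekly aggregate data, ~10 years
demand = rng.normal(size=T)                                # unobserved demand shifter (seasonality, etc.)
ads = 1.0 + 0.8 * demand + rng.normal(scale=0.3, size=T)   # budgets set in anticipation of demand
true_beta = 0.2
sales = 5.0 + true_beta * ads + 2.0 * demand + rng.normal(scale=0.5, size=T)

# OLS slope of sales on ads: cov(ads, sales) / var(ads)
beta_ols = np.cov(ads, sales)[0, 1] / np.var(ads, ddof=1)
print(f"OLS estimate: {beta_ols:.2f} vs true effect: {true_beta:.2f}")  # badly inflated
```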

  6. We have just seen two ways for observational data to provide inaccurate results. • Aggregate time-series data • Advertising doesn’t vary exogenously over time; its variation is correlated with expected sales. • Individual cross-sectional data • The types of people who see ads aren’t the same population as those who don’t see ads. • Even in the absence of any ads, they might well have different shopping behavior. • When existing data don’t give a valid answer to our question of interest, we should consider generating our own data.

  7. An experiment is the best way to establish a causal relationship. • Systematically vary the amount of advertising: show ads to some consumers but not others. • Measure the difference in sales between the two groups of consumers. • Like a clinical trial for a new pharmaceutical. • Almost never done in advertising, either online or in traditional media. • Exceptions: direct mail, search advertising.
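
Under randomization, the core analysis reduces to a difference in mean sales between the two groups. A minimal sketch with made-up sales vectors (nothing here is the study's data):

```python
import numpy as np

def ab_estimate(y_treat, y_ctrl):
    """Difference in mean sales between randomized groups, with its standard error."""
    diff = y_treat.mean() - y_ctrl.mean()
    se = np.sqrt(y_treat.var(ddof=1) / len(y_treat) +
                 y_ctrl.var(ddof=1) / len(y_ctrl))
    return diff, se

rng = np.random.default_rng(2)
y_t = rng.exponential(2.0, size=80_000)  # hypothetical treated customers' sales
y_c = rng.exponential(2.0, size=20_000)  # hypothetical control customers' sales
diff, se = ab_estimate(y_t, y_c)
print(f"lift = {diff:+.4f}, SE = {se:.4f}, 95% CI half-width = {1.96 * se:.4f}")
```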

  8. Our understanding of advertising today resembles our understanding of physics in the 1500s. • Do heavy bodies fall at faster rates than light ones? • Galileo’s key insight: use the experimental method. • Huge advance over mere introspection or observation.

  9. Marketers often measure effects of advertising using experiments… • … but not with actual transaction data. • Typical measurements come from questionnaires: • “Do you remember seeing this commercial?” • “What brand comes to mind first when you think about batteries?” • “How positively do you feel about this brand?” • Useful for comparing two different “creatives.” • But do these measurements translate into actual effects of advertising on sales?

  10. A few previous experiments measured the effects of advertising on sales. • Experiments with IRI BehaviorScan (split-cable TV) • Hundreds of individual tests reported in several papers: • Abraham and Lodish (1995) • Lodish et al. (1995a,b) • Hu, Lodish, and Krieger (2007) • Sample size: 3,000 households. • Hard to find statistically significant effects. • Experiments by Campbell Soup Co. • Experimented across ~30 regions, not by individual. • Even harder to find significant effects. • Our experiment studies 1.6 million individuals.

  11. Our study combines a large-scale experiment with individual panel data. • We match the Yahoo! user database with a nationwide retailer’s customer database • 1,577,256 customers matched • 80% of matched customers assigned to the treatment group • Allowed to view 3 ad campaigns from the retailer on Yahoo! • Remaining 20% assigned to the control group • Do not see ads from the retailer • Ad campaigns are “Run of Yahoo! network” ads • Following the online ad campaigns, we received both online and in-store sales data: for each week, for each person • A third party de-identifies observations to protect customer identities • The retailer multiplied all sales amounts by a scalar factor
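
One common way to implement such an 80/20 split is a deterministic hash of the matched customer ID, so assignment is reproducible. This is a hypothetical sketch, not a description of Yahoo!'s actual mechanism.

```python
import hashlib

def assign_group(customer_id: str, treat_share: float = 0.80) -> str:
    """Deterministically map a customer ID to 'treatment' or 'control'."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform on [0, 1]
    return "treatment" if bucket < treat_share else "control"

print(assign_group("matched-id-000123"))  # hypothetical ID -> 'treatment' or 'control'
```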

  12. By the end of the three campaigns, over 900,000 people had seen ads.

  13. Descriptive statistics for Campaign #1 indicate valid treatment-control randomization.

  14. We see a skewed distribution of ad views across individuals.

  15. In-store sales are more than five times as large as online sales, and have high variance across weeks.

  16. Sales vary widely across weeks and include many individual outliers.

  17. Not all of the treatment-group members browsed Yahoo! enough to see the retailer’s ads. • Only 64% of the treatment group browsed enough to see at least one ad in Campaign #1, so our estimated effects will be “diluted” by the 36% who saw no ads. • We expect similar browsing patterns in the control group, but cannot observe which control-group members would not have seen ads. • [Chart: pie charts splitting the treatment group into “Saw ads” vs. “Did not see ads” and the control group into “Would have seen ads” vs. “Would not have seen ads”.]
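
Because only 64% of the treatment group could have been exposed, the intent-to-treat (ITT) difference understates the effect on those actually treated. Assuming the ads cannot affect never-exposed members, the effect on the treated is the ITT estimate divided by the exposure rate; the illustrative input below (R$0.065) is chosen only because it rescales to roughly the R$0.102-per-viewer figure reported later.

```python
def effect_on_treated(itt_estimate: float, exposure_rate: float) -> float:
    """Rescale an intent-to-treat estimate to the exposed subpopulation,
    assuming zero effect on treatment-group members who never saw an ad."""
    return itt_estimate / exposure_rate

# hypothetical ITT lift of R$0.065 per assigned customer, 64% exposure rate:
print(f"R${effect_on_treated(0.065, 0.64):.3f} per treated customer")  # ~R$0.102
```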

  18. Descriptive statistics show a positive effect of the ads on sales. • But the effect is not statistically significant. • One reason is the 36% dilution of the treatment group.

  19. Suppose we had no experiment, and just compared spending by those who did or did not see ads. • We would conclude that ads decrease sales by R$0.23! • But this would be a mistake, because here we’re not comparing apples to apples.

  20. Pre-campaign data show that the non-experimental sales differences have nothing to do with ad exposure. • People who browse enough to see ads also have a lower baseline propensity to purchase from the retailer. • This potential mistake is solved by the experiment and the panel data.

  21. Ad exposures appear to have prevented a normal decline in sales during this time period. • Control-group sales fall. • Unexposed treatment-group sales fall. • Exposed treatment-group sales stay constant.

  22. Our difference-in-differences estimate yields a statistically and economically significant treatment effect. • Estimated effect per customer of viewing ads: mean = R$0.102, SE = R$0.043 (standard errors are heteroskedasticity-robust). • Estimated sales impact for the retailer: R$83,000 ± R$70,000 (95% confidence interval), based on 814,052 treated individuals. • Compare with an advertising cost of about R$20,000.
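
A minimal difference-in-differences sketch on per-person pre/post sales (array names hypothetical), followed by the arithmetic behind the slide's totals: R$0.102 × 814,052 ≈ R$83,000, and 1.96 × R$0.043 × 814,052 ≈ R$70,000.

```python
import numpy as np

def did_estimate(pre_t, post_t, pre_c, post_c):
    """Difference-in-differences on per-person pre/post sales arrays,
    with a robust SE built from per-person change scores."""
    d_t = post_t - pre_t  # change scores, treatment group
    d_c = post_c - pre_c  # change scores, control group
    effect = d_t.mean() - d_c.mean()
    se = np.sqrt(d_t.var(ddof=1) / len(d_t) + d_c.var(ddof=1) / len(d_c))
    return effect, se

# scaling the per-person estimate to the campaign total, using the slide's numbers:
n_treated = 814_052
print(f"total impact ≈ R${0.102 * n_treated:,.0f} ± R${1.96 * 0.043 * n_treated:,.0f}")
```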

  23. What happens after the two-week campaign is over? Positive effects during the campaign could be followed by: • Negative effects (intertemporal substitution) • Equal sales (short-lived effect of advertising) • Higher sales (persistence beyond the campaign) We can distinguish between these hypotheses by looking at the week following the two weeks of the campaign.

  24. We now take a look at sales in the week after the campaign ends. Previously, we calculated estimates using two weeks before and two weeks after the start of the campaign. Now, we calculate estimates using three weeks before and three weeks after. Recall that the campaign lasted two weeks.

  25. Estimates indicate a positive impact on sales in the week after the campaign ends. • Ads ran for two weeks. • The difference-in-differences (DID) estimator examines pre-post differences in sales for treated versus untreated individuals.

  26. Next we estimate separate effects on offline and online sales. As before, these are DID estimates. We see that 93% of the total effect on sales comes through offline sales.

  27. Do we capture the effects of ads by measuring only clicks? No. • Clickers buy more, as one would expect. • But non-clicking viewers show an increase in sales that represents 78% of the total treatment effect.
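
The decomposition itself is just size-weighted subgroup lifts. A sketch with illustrative inputs chosen only to reproduce the slide's 78/22 split (the real subgroup sizes and lifts are in the paper, not here):

```python
def effect_shares(n_click: int, lift_click: float, n_view: int, lift_view: float):
    """Share of the total sales effect contributed by clickers vs. non-clicking viewers,
    given subgroup sizes and per-person lifts (all inputs hypothetical)."""
    total = n_click * lift_click + n_view * lift_view
    return {"clickers": n_click * lift_click / total,
            "non-clicking viewers": n_view * lift_view / total}

print(effect_shares(n_click=40_000, lift_click=0.457,
                    n_view=160_000, lift_view=0.407))
# -> non-clicking viewers contribute ~78% of the total effect
```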

  28. The effect on non-clickers occurs in stores, not in the online store.

  29. The effect on clickers occurs both offline and online. • Those who click on the ads buy significantly more online. • The estimate on offline sales is too imprecise to be statistically significant.

  30. Decomposing the sales difference by age shows increasingly large treatment effects for older users. • Sales difference relative to baseline purchases of ~R$1.75 per person.

  31. Decomposing the sales difference by age shows a large, significant difference for senior citizens. 38% of the effect derives from the 6% of customers aged 65-90. Summary statistics compare senior citizens with the entire population.

  32. Conclusion:Retail Advertising Works! • Online display advertising increases both online and offline sales. Approximately 5% increase in revenue. • Total revenue effect estimated at 4X the cost of the ads. Perhaps more if the effects are persistent over time, or if we have imperfect database matching. • 93% of the increase in sales occurs offline. • 78% of the increase in sales comes from viewers, and only 22% from clickers. • Older consumers respond much more to ads: 40% of the treatment effect comes from the oldest 6%.

  33. I propose a product that automates experiments for advertisers. • Experiments are key to measuring ad effectiveness. • Measuring causal effects correctly. • Resolving “attribution” debates. • Automating the process will make it accessible to many more advertisers. • Help them find the “wasted half” of their advertising. • Provide a service with much better measurement than that provided by any other publisher in any advertising medium. • Many important questions become possible to answer: • Effects of targeting • Effects of frequency • Effects of different creatives • Past sales
