
AMA Marketing Effectiveness Online Seminar Series


Presentation Transcript


  1. AMA Marketing Effectiveness Online Seminar Series Marla Chupack American Marketing Association

  2. A wealth of information is available for marketing professionals at www.MarketingPower.com The #1 marketing site on the web

  3. Commonly Asked Questions • 1. Will I be able to get copies of the slides after the event? Yes. • 2. Is this web seminar being taped so I or others can view it after the fact? Yes.

  4. Putting Brand Measurement Systems to the Test: What's the right way to measure brand position? Keith Chrzan, Vice President, Marketing Sciences, Maritz Research

  5. Agenda • Background – uses and limitations of attribute ratings scales • Alternatives to Likert rating scales • Recent research • Empirical studies • Research design • Planned comparisons • Results • Conclusions • Summary and recommendations

  6. Polling Question In your brand image studies, what kinds of measures do you use for brand image attributes? Please check all that apply.
  [ ] Likert scale (e.g. strongly agree to strongly disagree)
  [ ] Performance rating (e.g. excellent, very good, good, fair, poor)
  [ ] Describes scale (e.g. scale ranges from “does not describe at all” to “describes completely”)
  [ ] Semantic differential scale
  [ ] Other rating scale
  [ ] Pick any scale (e.g. check all that apply)
  [ ] Pick K (e.g. pick the top 3 or top 5)
  [ ] Rankings (complete rank order)
  [ ] Partial rank order (e.g. rank the top 3 items)
  [ ] Max-diff scaling
  [ ] Other scaling method
  [ ] None – we don’t measure brand image

  7. Background • Standard/ideal outputs of brand image research (AKA brand positioning, brand equity, brand choice research) • Analysis of between-brand differences (t-tests/ANOVA) • Analysis of relative brand positions (perceptual mapping with discriminant analysis or MDS) • Identify the drivers of brand choice (regression analysis/multinomial logit) • Ideally a brand image measurement system should produce • credible brand positions (face validity) • strong differences among brands (discriminant validity) • powerful predictions of brand choice (predictive validity)
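To make the first of these analyses concrete, here is a minimal sketch of a between-brand difference test on a single attribute using scipy; the respondent count and ratings are made up for illustration, not taken from the study:

```python
# Hypothetical example: paired t-test for a between-brand difference on one
# attribute, rated by the same respondents (a within-subjects comparison).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200                                   # respondents (illustrative)
brand_a = rng.integers(1, 6, size=n)      # 5-point Likert ratings of brand A
brand_b = np.clip(brand_a + rng.integers(-1, 2, size=n), 1, 5)  # correlated ratings of B

t, p = stats.ttest_rel(brand_a, brand_b)  # paired, since each respondent rates both brands
print(f"t = {t:.2f}, p = {p:.3f}")
```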

  8. Background • Typical data structure – rating scales: one row per attribute, one column per brand, plus a row indicating which brand the respondent chose

                Brand A   Brand B   Brand C   ...   Brand j
  Attribute 1      5         3         4     ...      2
  Attribute 2      4         2         1     ...      5
  ...
  Attribute k      3         3         2     ...      5
  Choice           0         0         1     ...      0
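As a sketch, one respondent's slice of this structure could be built as a small pandas DataFrame; the brand names and values below are placeholders, not study data:

```python
import pandas as pd

# One respondent's ratings: rows are attributes, columns are brands.
ratings = pd.DataFrame(
    {"Brand A": [5, 4, 3], "Brand B": [3, 2, 3], "Brand C": [4, 1, 2]},
    index=["Attribute 1", "Attribute 2", "Attribute k"],
)
ratings.loc["Choice"] = [0, 0, 1]  # 1 marks the brand this respondent chose
print(ratings)
```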

  9. Background • Attribute rating scales used as • Measures of brand position • Use to plot location in perceptual space • Use to test for between-brand differences • Use to test for over-time differences in tracking • Use as independent variables in predictive model of brand preferences • Commonly used are • Likert scales (5 point agree-disagree scales) • Performance ratings (excellent, very good, good, fair, poor) • Description scales (5 or 7 points from “describes completely” to “does not describe at all”)

  10. Background • Common problems with these scales • Brand halo effect leads to • Lack of strong differentiation among brands • Correlated attributes • See Dillon (2001) for an analytical way to remove halo effect (but it works only at the aggregate level, and does not remove the halo from respondent-level data) • The small differences between brands and sizeable correlations among attributes produce weak predictive models, few significant predictors

  11. Recent Studies • Whitlark and Smith (2005) compared ratings data and “Pick K” or constrained pick data (“which 3 attributes do you most associate with Volvo?”) • But they collected only ratings and merely inferred the pick data, assuming identical cognitive processes that should have been tested • They assessed only the face validity of brand positions, not discriminant or predictive validity • They did not report their sample size

  12. Recent Studies • Driesener and Romaniuk (2006) compare ratings to rankings and pick any • Small sample size of 105 respondents • Compare only simple univariate measures of brand position • Fail to compare multivariate measures of brand position (perceptual maps), discriminant or predictive validity

  13. New Empirical Research • Our studies remedy the shortcomings of this recent work • Large sample sizes, part of commercial marketing research studies • Test of a broader range of scaling methods • Assess face validity plus quantitative tests of discriminant and predictive validity

  14. Two Empirical Studies • Study 1 - compare 3 image measurement systems • Likert ratings • Comparative ratings • Best-worst measures of brand position • Study 2 – compare • Pick 3 data • Comparative ratings

  15. Planned Comparisons • Power to detect between-brand differences – ANOVA F statistics • Perceptual maps • Prediction with MNL choice model • Same or different coefficients? • Model fit – McFadden’s r2 • Number and interpretation of significant predictors
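For readers unfamiliar with the fit statistic named above, McFadden's pseudo-r2 compares the fitted model's log-likelihood to that of a null model assigning every brand equal probability. A minimal sketch; the numbers in the example call are illustrative only, not the study's:

```python
import numpy as np

def mcfadden_r2(ll_model: float, n_choices: int, n_alternatives: int) -> float:
    """McFadden's pseudo-r2 = 1 - LL(model) / LL(null), where the null
    model gives each of the n_alternatives brands equal probability."""
    ll_null = n_choices * np.log(1.0 / n_alternatives)
    return 1.0 - ll_model / ll_null

# e.g. 443 respondents choosing among 4 brands (log-likelihood made up)
print(round(mcfadden_r2(ll_model=-459.0, n_choices=443, n_alternatives=4), 3))
```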

  16. Comparative Rating Scales • Wendy’s has • Much faster service than other brands • Somewhat faster service than other brands • About the same speed of service as other brands • Somewhat slower service than other brands • Much slower service than other brands • May avoid a positive bias, and should return a more normal distribution of responses • Comparative anchors may be more objective • More meaningful differences among brands may occur on the attributes • This may help predictive modeling

  17. Max-diff Measures • With which ONE of the following statements do you most agree and with which do you least agree?

  Feature                                    Most   Least
  McDonalds has fast service                 [ ]    [ ]
  Taco Bell has low prices                   [ ]    [ ]
  Subway has convenient locations            [ ]    [ ]
  Wendys has healthy food                    [ ]    [ ]
  Subway has good tasting food               [ ]    [ ]
  Taco Bell has a wide selection of food     [ ]    [ ]
  Subway has clean dining rooms              [ ]    [ ]
  McDonalds has friendly service             [ ]    [ ]
  Wendys has valuable deals and coupons      [ ]    [ ]
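One common way to score such best-worst data is simple counting: each statement's score is (times chosen most − times chosen least) divided by times shown. The presentation does not specify its scoring method, so counting is sketched below only as the simplest option, with hypothetical tallies:

```python
# statement: (times picked "most", times picked "least", times shown)
counts = {
    "McDonalds has fast service": (120, 30, 300),
    "Taco Bell has low prices":   (90, 60, 300),
    "Wendys has healthy food":    (150, 20, 300),
}

for statement, (most, least, shown) in counts.items():
    score = (most - least) / shown          # falls in [-1, +1]
    print(f"{statement}: {score:+.2f}")
```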

  18. Pick 3 • Which 3 of the following attributes do you associate with <BRAND>? [ ] Safe [ ] Features [ ] Price [ ] Performance [ ] Ops [ ] Fit [ ] Env [ ] Looks [ ] Country
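Pick-3 answers translate directly into binary indicator data for analysis; a small sketch with two hypothetical respondents and the attribute labels from the screen above:

```python
import pandas as pd

attributes = ["Safe", "Features", "Price", "Performance", "Ops",
              "Fit", "Env", "Looks", "Country"]
picks = [["Safe", "Price", "Looks"],             # respondent 1's three picks
         ["Features", "Price", "Performance"]]   # respondent 2's three picks

# One row per respondent, one 0/1 column per attribute.
indicator = pd.DataFrame(
    [[int(a in p) for a in attributes] for p in picks],
    columns=attributes,
)
print(indicator)
```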

  19. Empirical Studies • Research design • Results

  20. Study 1 Research Design • 443 respondents, screened to confirm they dine at fast food restaurants at least once a week • Web-based survey • Half of respondents received Likert scale ratings of 4 brands on 9 attributes • The other half received comparative rating scales for the same 4 brands and 9 attributes • All respondents also received Max-diff questions • 9 attributes, 4 brands • Half of these respondents received the ratings before, and half after, the Max-diff questions

  21. Discriminating Power • Power to detect between-brand differences – RM ANOVA F statistics

                        Likert   Comparative   Max-diff
  Fast service             5.7         5.1        12.1
  Low price               29.8        29.9       131.3
  Convenient location     48.1        84.0       244.8
  Healthy food           138.1       209.0       432.5
  Good taste              18.7        26.0        81.1
  Wide selection          12.5        16.5        63.0
  Clean dining room       17.4        19.1       123.7
  Friendly service         6.0        11.0        62.2
  Coupons & deals         10.1        14.4        51.7
  Average F               31.7        46.1       133.6
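An F statistic of this kind can be computed for one attribute with statsmodels' repeated-measures ANOVA, treating brand as the within-subject factor; the data below are randomly generated placeholders, not the study's:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
n = 200  # respondents (illustrative)
long = pd.DataFrame({
    "respondent": np.repeat(np.arange(n), 4),
    "brand": np.tile(["McDonalds", "Wendys", "Taco Bell", "Subway"], n),
    "rating": rng.integers(1, 6, size=4 * n),   # placeholder Likert ratings
})

# Brand is the repeated (within-subject) factor; each respondent rates all 4.
res = AnovaRM(long, depvar="rating", subject="respondent", within=["brand"]).fit()
print(res.anova_table)   # F statistic for the brand effect
```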

  22. MDPREF Map for Likert Ratings [perceptual map image: McDonalds, Wendy's, Taco Bell and Subway positioned relative to the nine attributes]

  23. MDPREF Map for Comparative Ratings [perceptual map image: the same four brands and nine attributes]

  24. MDPREF Map for Max-diff [perceptual map image: the same four brands and nine attributes]
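Since the map images do not survive in the transcript, here is a rough stand-in for how an MDPREF-style map can be produced: a two-dimensional SVD biplot of the brand-by-attribute matrix of mean scores. The means below are invented for illustration, and only four of the nine attributes are shown:

```python
import numpy as np

brands = ["McDonalds", "Wendys", "Taco Bell", "Subway"]
attrs = ["Fast service", "Low price", "Healthy food", "Good taste"]
means = np.array([[4.2, 4.0, 2.1, 3.2],    # hypothetical mean ratings
                  [3.6, 3.4, 3.0, 3.9],
                  [3.3, 4.3, 2.4, 3.5],
                  [3.1, 3.2, 4.1, 3.8]])

centered = means - means.mean(axis=0)       # column-center before the SVD
u, s, vt = np.linalg.svd(centered, full_matrices=False)
brand_xy = u[:, :2] * s[:2]                 # brand coordinates in 2-D
attr_xy = vt[:2].T                          # attribute directions

for b, (x, y) in zip(brands, brand_xy):
    print(f"{b}: ({x:+.2f}, {y:+.2f})")
```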

  25. Results from Perceptual Maps • Similar brand positions, except that the attribute “fast service” differs substantially across methods • Wendys is the highest rated brand on fast service in the Likert ratings (Subway is the lowest) • Subway and Wendys are highest on comparative ratings • McDonalds is the highest on the best-worst (Max-diff) measure • Fast service has the lowest F statistic in the ANOVA table, and is only barely significantly different across brands for the rating-scale measures

  26. Results – MNL Prediction • Prediction – Fit statistics

                   Likert   Comparative   Max-diff
  McFadden's r2      .253          .353       .292

  • The comparative ratings do the best job of predicting brand choice (brand used most often) • Similar results for predicting brand share
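The MNL model referred to here is a conditional logit in which each brand's utility is a linear function of its attribute scores. A self-contained sketch fitted by maximum likelihood with scipy; the data are simulated placeholders, so the printed fit is near zero rather than the values in the table above:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, n_brands, n_attrs = 300, 4, 9
X = rng.normal(size=(n, n_brands, n_attrs))  # respondent x brand x attribute
choice = rng.integers(0, n_brands, size=n)   # index of each chosen brand

def neg_log_lik(beta):
    util = X @ beta                           # (n, n_brands) utilities
    util -= util.max(axis=1, keepdims=True)   # numerical stability
    logp = util - np.log(np.exp(util).sum(axis=1, keepdims=True))
    return -logp[np.arange(n), choice].sum()

fit = minimize(neg_log_lik, x0=np.zeros(n_attrs), method="BFGS")
ll_null = n * np.log(1 / n_brands)
print("coefficients:", np.round(fit.x, 2))
print("McFadden r2:", round(1 + fit.fun / ll_null, 3))   # fit.fun = -LL(model)
```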

  27. Results – MNL Prediction • Prediction – Number and interpretation of significant predictors

                            Likert   Comparative   Max-diff
  Fast service                 .21           .33        .02
  Low prices                   .23           .72        .05
  Convenient locations         .42           .63        .19
  Healthy food                 .20          -.11        .01
  Good tasting food           1.34          1.19        .52
  Wide selection of food      -.04          -.21       -.08
  Clean dining room            .05           .71       -.01
  Friendly service             .14          -.20       -.02
  Valuable deals/coupons       .12           .08       -.02

  28. Results – MNL Prediction • The different attribute ratings yield different derived importance models • Comparative ratings do the best job of predicting brand shares, have the most significant drivers • Who thinks price, speed of service and cleanliness aren’t important?

  29. Study 1 Conclusions • All three methods produce similar perceptual maps (except for the “fast service” attribute, on which brand positions vary by method) • Both comparative ratings and Max-diff outperform Likert ratings in terms of predictive validity and discriminant validity • Comparative ratings have apparently greater predictive validity than the Max-diff measure, and yield a predictive model with greater face validity • Max-diff has much better discriminant validity than do comparative ratings

  30. Study 2 Research Design • 430 respondents, screened for near-term purchase in a confidential consumer durable category • Web-based survey • 7 brands, of which each respondent evaluated 4 on each of 9 attributes • Half of the respondents completed each of the two brand attribute evaluation tasks: • Comparative ratings • Pick 3

  31. Results • Power to detect between-brand differences – p-values

                 Comparative   Pick 3
  Features             .02       .03
  Price                .07       .41
  Performance          .01       .34
  Ops                  .00       .01
  Fit                  .09       .19
  Env                  .00       .53
  Qual                 .00       .03
  Safe                 .02       .53
  Looks                .27       .07
  Country              .97       .01
  Average p            .14       .26
  # significant          6         4

  32. MDPREF Map of Means [perceptual map image: seven blinded brands (D, F, H, N, T, V, Y) positioned relative to the attributes]

  33. Correspondence Analysis Map of Pick 3 [perceptual map image: the same seven blinded brands and attributes]
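Since the correspondence analysis map itself is lost from the transcript, here is a sketch of the underlying computation: an SVD of the standardized residuals of a brand-by-attribute count table. The counts are invented, and only 3 brands and 3 attributes are shown, for brevity:

```python
import numpy as np

# Rows: brands; columns: attributes; cell = how often picked (hypothetical).
counts = np.array([[40, 10, 25],
                   [15, 35, 20],
                   [20, 20, 45]], dtype=float)

P = counts / counts.sum()                    # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)          # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
u, s, vt = np.linalg.svd(S, full_matrices=False)

row_xy = (u[:, :2] * s[:2]) / np.sqrt(r)[:, None]   # brand coordinates
col_xy = (vt[:2].T * s[:2]) / np.sqrt(c)[:, None]   # attribute coordinates
print(np.round(row_xy, 2))
print(np.round(col_xy, 2))
```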

  34. Results from Perceptual Maps • Comparative ratings produce the richer map because more of their attributes are significant; the Pick 3 map is more impoverished • The maps have different interpretations, but the differences occur chiefly where attribute significance differs

  35. Results – MNL Prediction • Prediction – Fit statistics

                   Comparative   Pick 3
  McFadden's r2           .358     .090

  • The comparative ratings again do the better job of predicting brand choice (brand most likely to purchase)

  36. Results – MNL Prediction • Prediction – number and interpretation of significant predictors

                 Comparative   Pick 3
  Features              1.55       -
  Price                  .65       -
  Performance            .38     -.99
  Ops                    .53       -
  Fit                      -       -
  Qual                     -       -
  Safe                     -     -.71
  Looks                    -     -.95
  Country                .61       -

  37. Results – MNL Prediction • The different attribute measures yield different derived importance models • The Pick 3 model is clearly inferior: it has fewer significant drivers, and those it has carry implausible negative signs

  38. Study 2 Conclusions • The methods produce different perceptual maps • Comparative ratings have greater discriminant and predictive validity than does Pick K scaling

  39. Overall Conclusions • Comparative ratings seem to be the most valid of the 4 brand scaling methods tested • Sensible perceptual maps • Credible drivers of brand choice • Greater discriminant validity than all but Max-diff scaling • Greatest predictive power • Max-diff may merit further testing • Next up: test pick any and semantic differential scales against comparative rating scales

  40. Thanks for your time and participation today! To replay this webcast: Go to www.MarketingPower.com For copies of today’s presentation: Go to http://www.maritzresearch.com/brandmeasurement/ To contact today’s speaker: Keith Chrzan Keith.Chrzan@maritz.com Questions for AMA: Marla Chupack MarlaChupack@ama.org To join the American Marketing Association: Go to www.MarketingPower.com/join
