PR Evaluation: Change, what change?
Dr Tom Watson, The Media School, Bournemouth University, UK
Watson 1994
Research amongst IPR (UK) members in 1992:
• Practitioners viewed evaluation very narrowly and lacked confidence in promoting evaluation methods to employers and clients
• Most relied on output measurement of media coverage
• Evaluation was not undertaken because of a lack of time, budget and knowledge of methods
• Evaluation was feared because it could challenge the logic of practitioners’ advice and activities
• 75% claimed to do some form of evaluation
• Little was spent on evaluation, with 74.3% spending 0 to 5% of budget
• Picture of the practitioner as a ‘doer’, not an adviser
Benchpoint 2009
• Many similar results
• Continued marginal improvement
• 77% claim to measure
• Focus on outputs; AVEs remain
• Very splintered methodology – no single method
• Little evidence of measurement of online media/activity
• Reduction in opinion research – why?
Research priorities – Watson 2008
• Public relations’ contribution to strategic decision-making, strategy development and realisation, and the efficient operation of organisations
• The value that public relations creates for organisations through building social capital and managing key relationships
• The measurement and evaluation of public relations, both offline and online
Where next? Gregory & Watson 2008
• There are many robust evaluation methods in existence – why is there a gap between academics and practitioners on this?
• There is no agreement on what ROI is, or whether it can be applied to PR/corporate communications. Research is needed
• Dashboards and scorecards are increasingly used in practice. Are they a presentation method or evidence of multiple methods being used for evaluation?
• Is new applied theory needed on the measurement of social media and online PR activity?