
Real Numbers for Real Dialogue



Presentation Transcript


  1. Real Numbers for Real Dialogue Improving Customer Dialogue through Customer Focused Service Measurement September 20, 2007

  2. Real Numbers for Real Dialogue West Bend Mutual Insurance Scott Grinna – Director of IT Administration September 20, 2007

  3. Agenda • Who is West Bend Mutual Insurance • Why We Measure Service • Our Evolving Service Measurement Program • Making “Availability” Real • Lessons Learned • Q&A

  4. Who is West Bend Mutual Insurance

  5. Introduction to West Bend Mutual Insurance • Providing Midwest policyholders with sound property and casualty insurance since 1894. • Today 979 associates and over 600 independent insurance agencies serve customers in the Midwest. • Since 1973, rated A+ Superior by A.M. Best, financial analysts of the insurance industry; only 500 of the 3,343 rated property/casualty insurance companies hold an A+ or A++ rating. • Named to Ward’s Top 50 Benchmark Group since 1997. • Able to build competitive advantage for our agents and company by coupling superior products with Legendary Service in a way that enhances relationships.

  6. West Bend Mutual Insurance Technology Philosophy • Technology is an enabler of business strategies. Business comes first. • We embrace technology as a tool to provide competitive advantage to our agencies and WBM. • The greatest productivity and service enhancements are found in agency automation offerings. • Internally, we use technology to increase efficiency with imaging and an automated workflow system. • West Bend Connect is WBM’s business-to-business offering for our agents, associates, and vendors that gives them a service option growing weekly in capability and utilization. • The West Bend Connect user interface works well for our agents because they help design it.

  7. West Bend Mutual Technology Environment • Two Wisconsin Offices • 212 Full-time IT Associates • 979 internal customers and 2200+ external customers • 127 Field Associates working remotely

  8. West Bend Mutual Technology Environment (Continued) • Critical applications across varied platforms • Spending on automation outpaces our closest competitors by 0.86% of revenue.

  9. Why we measure service

  10. All professions are a conspiracy against the laity. - George Bernard Shaw

  11. Initiating Service Delivery Measurement at West Bend Mutual • Redefined in 2000 as part of a sweeping effort to take system performance to the next level. • Initiated “System Availability” metric. • Implemented an internal Service Level Agreement (SLA) and metric with end users. • Integrated metrics into the IT division dashboard, reported at the division and corporate levels.

  12. Our evolving service measurement program

  13. IT Metrics Principles • Ultimately, our customers determine the value provided by IT. • We will measure ourselves against our customers’ expectations. • All stakeholders will understand and agree to selected metrics. • We will use metrics to – • Understand the past • Manage the present • Plan the future • As customer expectations change, so will our metrics.

  14. Metrics Program Approach • Organize stakeholder groups • Define what matters most - value • Define meaningful metrics • Find out where we are • Review baseline and define targets with stakeholders • Integrate into ‘Dashboard’ and rewards systems • Conduct periodic reviews with all affected stakeholders

  15. Current Metrics and Engagement • Operations • Project Portfolio • "IT Dashboard"

  16. IT Dashboard Evolution • Status Quo: “Not Sure” • Operations Performance and Efficiency: “Be reliable” • Support: “Keep us productive” • IT Services: “Move us forward”

  17. Making “Availability” Real

  18. Getting to Real Availability Problems with the System Availability metric • Not equating to end-user perception • Up/Down vs. Response Time • System vs. Application

  19. System Monitoring (e-mail) • Monitored components: Cables, Network, Switch, MailSweeper, Exchange, Windows 2000

  20. System Monitoring (e-mail) • Monitored components and measured availability: Cables, Network (99%), Switch, MailSweeper, Exchange (99%), Windows 2000 (99%)

  21. Application Monitoring (e-mail) • Component availabilities: Network 99%, Exchange 99%, Windows 2000 99% • Application Availability Total = 97%
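
The 97% total on this slide is the serial-availability arithmetic: if the e-mail transaction depends on every component in the chain, the component availabilities multiply (assuming independent failures, a simplifying assumption rather than something stated in the deck). A minimal sketch:

```python
# Serial availability model: an e-mail transaction succeeds only if every
# component in the chain is up, so availabilities multiply (assuming
# independent failures -- a simplifying assumption, not from the slides).
component_availability = {
    "Network": 0.99,
    "Exchange": 0.99,
    "Windows 2000": 0.99,
}

application_availability = 1.0
for name, avail in component_availability.items():
    application_availability *= avail

print(f"Application availability: {application_availability:.1%}")  # ~97.0%
```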

  22. Getting to Real Availability Problems with the System Availability metric • Not equating to end-user perception • Up/Down vs. Response Time • System vs. Application • Not customer segmented • Reliance on end-user feedback • Response time perception • Consistency • Delays • Difficult to get ‘complete picture’ • Which applications degraded? • When did degradation start? • How much did performance degrade?

  23. Getting to Real Availability To "fix" System Availability metric problems, we could… • Develop a model that captures all application and system component interdependencies. • Monitor system components called out by the model. • Use the model and monitoring to extrapolate end-user experience. But… • Cost • Time • Accuracy

  24. Our goals • Measure business transaction response time for all critical applications. • Measure continuously. • Respond before end users call. • Report breakdown by: • Business Unit • Application • Transaction • Continuously improve performance.

  25. Our approach • Acquire and implement a tool that measures business transaction response time – from “click” to response. • Monitor continuously, not on-demand. • Integrate monitoring alerts with Incident Management. • Integrate reporting with Problem Management. • Structure reporting to the intended audience. • Incorporate measurements into division reports, corporate reports, and performance objectives.

  26. The process • Script business transactions for mission-critical applications. • Define thresholds for each business transaction: • Performance • Give Up (not available) • Continuously execute the scripts on "Agent" PCs. • Agent PCs report transaction response times and alert the Service Desk when thresholds are exceeded. • Transaction response time measurements are aggregated into application performance and availability metrics. • Devise and distribute reports geared to stakeholder needs.
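
As a rough illustration of the two thresholds described above, the sketch below classifies one scripted transaction's measured response time as ok, degraded, or unavailable. The class name, function, and threshold values are hypothetical, not West Bend's actual tooling or limits.

```python
# Hypothetical sketch: classify one scripted business transaction against the
# two thresholds described on the slide. Names and values are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransactionThresholds:
    performance_seconds: float  # slower than this -> performance alert
    give_up_seconds: float      # slower than this (or no response) -> unavailable

def classify(response_seconds: Optional[float], t: TransactionThresholds) -> str:
    """Map one measured response time to a status the support desk can act on."""
    if response_seconds is None or response_seconds >= t.give_up_seconds:
        return "unavailable"   # the script gave up waiting
    if response_seconds >= t.performance_seconds:
        return "degraded"      # available, but over the performance target
    return "ok"

# Illustrative thresholds: a 5 s performance target and a 30 s give-up limit.
thresholds = TransactionThresholds(performance_seconds=5.0, give_up_seconds=30.0)
print(classify(3.2, thresholds))   # ok
print(classify(8.7, thresholds))   # degraded
print(classify(None, thresholds))  # unavailable
```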

  27. Techniques • Tools overview: • Agent PCs execute the scripts. • A reporting server collects data and provides native reporting. • Data is extracted to a data warehouse for custom reporting. • Defining what to monitor: engage Business Units to • Identify the top 5 mission-critical applications • Identify representative transactions for each application • Establish thresholds (Performance, Give Up) • Implementation: • Stay application focused • Data: define the data to be used; segregate it from other production reporting. • Monitoring and alerts: • Performance alerts: use 2 consecutive alerts as the trigger for a support response. • Validation: Service Desk staff attempt to recreate the error prior to escalation.
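
The "2 consecutive alerts" rule above can be expressed as a small stateful check: a single slow sample is ignored, and the support response triggers only when two alerts arrive back to back. The sketch below is a hypothetical illustration, not the monitoring product's API.

```python
# Hypothetical sketch of the "two consecutive performance alerts" trigger:
# a single slow sample is ignored, but two in a row opens a support response.
class ConsecutiveAlertTrigger:
    def __init__(self, required: int = 2):
        self.required = required
        self.streak = 0

    def record(self, alerted: bool) -> bool:
        """Return True when the alert streak reaches the trigger threshold."""
        self.streak = self.streak + 1 if alerted else 0
        return self.streak >= self.required

trigger = ConsecutiveAlertTrigger()
samples = [False, True, True, False]   # per-execution "threshold exceeded" flags
fired = [trigger.record(s) for s in samples]
print(fired)  # [False, False, True, False] -- fires on the second consecutive alert
```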

  28. Service Reporting • Real business transaction information

  29. The Information

  30. The Information

  31. Service Reporting • Real business transaction information • Live application status

  32. Service Reporting • Real business transaction information • Live application status • Application Scorecards • By Company • By Business Unit • By Transaction • Application Performance Trends • By Company • By Business Unit • By Transaction
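
To make the scorecard idea concrete, the sketch below rolls individual transaction measurements up into an availability figure per business unit and application. The record layout, business unit names, and transaction names are assumptions for illustration, not the actual data warehouse schema.

```python
# Hypothetical sketch: roll per-transaction measurements up into an
# availability-style scorecard per (business unit, application).
from collections import defaultdict

measurements = [
    # (business_unit, application, transaction, status) -- illustrative records
    ("Claims", "Imaging", "open-document", "ok"),
    ("Claims", "Imaging", "open-document", "degraded"),
    ("Underwriting", "West Bend Connect", "quote-policy", "ok"),
    ("Underwriting", "West Bend Connect", "quote-policy", "unavailable"),
]

totals = defaultdict(lambda: {"samples": 0, "available": 0})
for unit, app, _txn, status in measurements:
    bucket = totals[(unit, app)]
    bucket["samples"] += 1
    if status != "unavailable":       # degraded still counts as available
        bucket["available"] += 1

for (unit, app), b in sorted(totals.items()):
    pct = b["available"] / b["samples"]
    print(f"{unit:<14} {app:<18} availability {pct:.0%}")
```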

  33. The Results • Monitoring today: • 10 applications monitored for 10 business units • 250 distinct business transactions monitored • 12,800 transactions per day • Customer results: • Improved IT credibility • Improved communication – same language • Increased engagement – customer requests for monitoring • Incident and Problem Management: • Improvements in Proactive Responses • Troubleshooting Improvements • Improved Problem Management

  34. Lessons Learned

  35. Availability Lessons • Challenges: • IT buy-in is harder than end-user buy-in. • Breaking down functional barriers • Integration with change management • Monitoring as a production service • Staffing commitment • Data Warehousing commitment

  36. Availability Next Steps • Integrate Service Priorities with Business Continuity Priorities • Integrate Service Priorities with IT Cost Management Strategies • Integrate Service Monitoring with Component Monitoring

  37. Measurement Program Lessons • Metrics conversations with customers are always about IT value • Engage customers to determine what’s important to measure • Offer different views of metrics to different levels of stakeholders • Think ahead on what behavior you might drive, based on how the metric is constructed • Limit the number of metrics utilized by any given group • Make the metrics understood

  38. Measurement Program Lessons(continued) • Begin with WHAT you want to measure, then determine HOW • The point tells you where you are, but the trend tells you where you are going • Maintain communication with all stakeholders • Work diligently to close any gaps between quantitative and qualitative results. • Measure consistently • Change your metrics as your business changes

  39. Questions and Answers
