
“Ensuring successful CRM implementations – can Model Driven QA make a difference?”


Presentation Transcript


  1. “Ensuring successful CRM implementations – can Model Driven QA make a difference?” • A webinar by eBay and Cognizant • 7th October 2010

  2. Survey - 1 • Q. If your organization has undertaken a Siebel implementation in the last 2 years, are you satisfied with the way it was implemented? Tick one of the following options: • Yes, it went without major issues • No, we were expecting it to go smoother

  3. Satisfaction Levels for CRM Projects • Please agree or disagree with the following statement: “Business results anticipated from the implementation were met or exceeded” (Source: “Answers To Five Frequently Asked Questions About CRM Projects”, a report by Forrester Vice President and Principal Analyst Bill Band, published in 2008) • Statistics on failed CRM projects, as assessed by leading analyst firms: 2001 Gartner Group: 50% • 2002 Butler Group: 70% • 2002 Selling Power, CSO Forum: 69.3% • 2005 AMR Research: 18% • 2006 AMR Research: 31% • 2007 AMR Research: 29% • 2007 Economist Intelligence Unit: 56% • 2009 Forrester Research: 47% (Source: “CRM Failure Rates: 2001-2009”, a blog post by Michael Krigsman on ZDNet)

  4. eBay Speaker Introduction • Steve Hares - Senior Quality Engineering Manager, Release Manager, eBay - Customer Support Technology Solutions. Steve Hares is the Senior Manager of Quality Assurance for eBay’s Customer Support Technologies Group. He is responsible for the quality metrics for all software that is delivered to eBay’s customer support agents. In the last 6 months, Steve was instrumental in establishing the partnership with Cognizant that achieved all quality metrics for the production deployment of eBay’s new CRM solution. Before eBay, Steve was both a product development manager and a QA manager at Avaya, Lucent, Ascend Communications, and a host of small start-ups. Email: stephen.hares@ebay.com

  5. Cognizant Speaker Introduction • Rajarshi Chatterjee (Raj), Director and Head of CSP-Testing. Raj heads Customer Solutions Practice - Testing (CSP-Testing), a group that was incubated early in 2009 as a new horizontal combining the expertise of Cognizant’s Testing Practice and Customer Solutions Practice (CSP). With 400+ associates, CSP-Testing specializes in testing CSP applications in the CRM, BPM & CDI space. Email: rajarshi.chatterjee@cognizant.com

  6. About eBay and their CRM Program • About eBay: Founded in 1995, eBay connects millions of buyers and sellers globally on a daily basis and is the world's largest online marketplace. Our subsidiary PayPal enables individuals and businesses to securely, easily and quickly send and receive online payments. We also reach millions through specialized marketplaces such as StubHub, the world's largest ticket marketplace, and eBay Classifieds, which together have a presence in more than 1,000 cities around the world. In 2009, eBay realized $9B; the total worth of goods sold on eBay was $60 billion, or $2,000 every second, and we have 92 million active users at present. • About the Unify program: The Unify Program was initiated to make eBay’s customer support and service the best in the industry. We wanted to ensure that the user experience is the best at all times - from the moment a user (buyer or seller) raises a request, through its research, until it is resolved.

  7. What was this Program about? Simplifying the service management platform and making it more scalable. [Before/after architecture diagram: buyers & sellers reach customer support through web forms, email, chat and phone; before Unify these channels fed a patchwork of regional and point applications - SFDC, CSI, regional e-mail tools, an enterprise agent tool for case & content management, a chat application, data warehouse, SAP, iPOP, eWFM, IVR and other regional tools; after Unify they feed a single, integrated application and data warehouse.] • Before the Unify Program: multiple applications • Multiple definitions & answers • Poor data quality • No global view • After the Unify Program: single case management system, globally • Fewer applications, better integrated • Fewer data silos; hence consistent data

  8. Key Program Objectives • Goals: NPS, resolution cost • Key metrics (measured KPIs): reduced transfers, agent utilization rate, average handle time, first contact resolution • Enablers: accurate content, consistent global process adoption, integrated case & contact history • Cost factors: system retirement, maintenance / supportability, end-state alignment • Benefits: consistent member experience, agent efficiency / accuracy, simplified technology stack, enhanced reporting. A program of this magnitude was not without its own risks!

  9. Risk Assessment • [Chart: risk scores of key solution components] 60+ functional components were evaluated along these risk dimensions and level of effort • Over 60% of the application was scored at High or Moderate risk. This meant some critical decisions had to be taken right at the beginning

  10. Key Decisions • [Six key decisions shown as numbered callouts on the slide] • MOST IMPORTANT: Bring in expertise where needed – Select the Right Partners!

  11. Key Decisions – Selecting the Right Partner • Rigorous process for vendor selection • Defined 33 criteria with clear objectives • Weighted the different criteria based on priority and importance • Set a target score for each criterion • Each vendor was rated by all members of a panel to derive weighted scores • Critical differentiator: a model-driven approach to QA
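A minimal sketch of how such weighted panel scoring can be computed; the criteria names, weights and ratings below are illustrative stand-ins, not the actual 33 criteria or the weights eBay used:

```python
# Illustrative weighted vendor-scoring sketch; criteria, weights and ratings are invented.
CRITERIA_WEIGHTS = {"model_driven_qa": 0.30, "siebel_expertise": 0.25,
                    "test_automation": 0.25, "commercials": 0.20}

# Each panel member rates each vendor per criterion on a 1-5 scale.
PANEL_RATINGS = {
    "vendor_a": [
        {"model_driven_qa": 5, "siebel_expertise": 4, "test_automation": 4, "commercials": 3},
        {"model_driven_qa": 4, "siebel_expertise": 4, "test_automation": 5, "commercials": 3},
    ],
    "vendor_b": [
        {"model_driven_qa": 3, "siebel_expertise": 5, "test_automation": 3, "commercials": 4},
        {"model_driven_qa": 3, "siebel_expertise": 4, "test_automation": 3, "commercials": 5},
    ],
}

def weighted_score(ratings):
    """Average the panel's ratings per criterion, then apply the criterion weights."""
    averages = {c: sum(r[c] for r in ratings) / len(ratings) for c in CRITERIA_WEIGHTS}
    return sum(CRITERIA_WEIGHTS[c] * averages[c] for c in CRITERIA_WEIGHTS)

for vendor, ratings in sorted(PANEL_RATINGS.items()):
    print(f"{vendor}: {weighted_score(ratings):.2f}")
```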

  12. Experiences of the SI Partner (speaker change at this slide)

  13. Overview of the Service Request Workflow • [Swim-lane diagram with lanes for the web form, phone, chat and email channels plus channel-independent steps] • Requests arrive by phone, email, chat or web form • Phone: IVR interaction, member login and verification, phone session initiation, assignment to agent • Chat: member login and verification, chat session initiation, assignment to agent • Email / web form: account and contact creation, assignment to agent • Channel independent: SR creation / classification, agent researches the case, Siebel passes the case context to InQuira, InQuira returns potential solutions / templates based on the case context, agent resolves the case, agent closes the case

  14. What made this Implementation Complex? Unlike most other Siebel implementations, by design no user transaction is fully executed from start to finish within Siebel. The other applications were also being developed at the same time as Siebel, i.e. there was very little ability to test any one system in the presence of the other systems. [Delivery timeline, Q4 through Q2 (Dec-June): foundational design; Block 1: Siebel & portal customization; Block 2: site integration; Block 3: email and web channel integration; Block 4: phone & chat integration.] In short, we had to think about how we would test multiple applications while they were still on the drawing board. The inability to replicate all of these applications in the QA environment at the same time necessitated an innovative approach. That approach was modeling!

  15. Model driven approach to QA • The 3 components of model-driven testing: • Modelling for Functional Testing - ensuring exhaustive coverage, regression testing, risk-based testing, test-driven development • Modelling for Test Data preparation - for functional testing, for performance testing • Modelling for Performance - infrastructure sizing, single- and multi-user load and performance testing

  16. Model driven approach to QA • Goals for model-driven testing - Modelling for Functional Testing: • Ensuring exhaustive coverage • Keeping test scenarios in sync with an evolving application • Repeated testing of “weak links” in the chain • Test-driven development – alerting before is better than detecting later

  17. Modeling in 3 Easy Steps using ADPART • Step 1 - Define business processes: create process flow diagrams in ADPART or import them from Visio, capturing the tasks to be executed, the rules, and the parameters that determine the outcome • Step 2 - Define parameters for each step: Input (pre-conditions, triggering events, user input), Process details, Output (expected response, messages / notifications, triggers to start / stop other processes); set variables & data types • Step 3 - Generate test scenarios: scenarios are rendered automatically
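ADPART is Cognizant's proprietary tool, so its API is not shown here; the sketch below only illustrates the underlying idea of step 3, assuming the business process is represented as a directed graph of steps whose start-to-end paths are enumerated to render test scenarios automatically (the step names are invented, not the actual Unify workflow model):

```python
# Illustrative sketch of model-driven scenario generation: the business process is a
# directed graph of steps, and every start-to-end path becomes one test scenario.
# The step names below are invented, not the actual Unify workflow model.
process = {
    "incoming_request": ["member_verification"],
    "member_verification": ["assign_to_agent", "verification_failed"],
    "assign_to_agent": ["create_sr"],
    "create_sr": ["research_case"],
    "research_case": ["resolve_case", "escalate"],
    "escalate": ["research_case"],       # loop back after escalation
    "resolve_case": ["close_case"],
    "verification_failed": [],
    "close_case": [],
}

def enumerate_scenarios(graph, node="incoming_request", path=None, max_depth=10):
    """Depth-first enumeration of start-to-end paths; the depth cap makes sure
    loops such as escalate -> research_case still terminate."""
    path = (path or []) + [node]
    if not graph[node] or len(path) >= max_depth:
        yield path
        return
    for successor in graph[node]:
        yield from enumerate_scenarios(graph, successor, path, max_depth)

for i, scenario in enumerate(enumerate_scenarios(process), start=1):
    print(f"Scenario {i}: " + " -> ".join(scenario))
```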

  18. How did we exploit the model? • Regression testing: automatic comparison between 2 versions of the business processes highlights differences due to newly added steps, modified steps and deleted steps, and enables creation of multiple test suites • Probability simulation: enables forcing of specific paths by modifying the probability at decision nodes, to test incremental functionality and to assess risks from exception situations • The model is based on “expectations” from the system; its efficacy depends on the richness of the data used to simulate those conditions. This makes test data preparation so important!
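A minimal sketch of the two techniques named above, under the same illustrative graph representation: a structural diff of two model versions to flag added, modified and deleted steps, and branch weights at decision nodes to force specific paths more often (the models and probabilities are invented):

```python
import random

# Two illustrative versions of a process model (step -> successor steps); v2 inserts
# a new "research" step. Both models and the branch weights below are invented.
process_v1 = {"verify": ["assign", "fail"], "assign": ["resolve"], "resolve": [], "fail": []}
process_v2 = {"verify": ["assign", "fail"], "assign": ["research"],
              "research": ["resolve"], "resolve": [], "fail": []}

def diff_models(old, new):
    """Structural diff of two model versions: newly added, deleted and modified steps."""
    return {
        "added":    [s for s in new if s not in old],
        "deleted":  [s for s in old if s not in new],
        "modified": [s for s in old if s in new and old[s] != new[s]],
    }

def simulate_path(graph, branch_weights, start="verify", max_steps=20):
    """Walk the model choosing branches by weight, e.g. to force an exception path."""
    node, path = start, [start]
    while graph.get(node) and len(path) < max_steps:
        branches = graph[node]
        weights = [branch_weights.get((node, b), 1.0) for b in branches]
        node = random.choices(branches, weights=weights, k=1)[0]
        path.append(node)
    return path

print(diff_models(process_v1, process_v2))   # flags 'research' as added, 'assign' as modified
# Bias verification towards the failure branch so the exception path gets exercised.
print(simulate_path(process_v2, {("verify", "fail"): 0.7, ("verify", "assign"): 0.3}))
```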

  19. Model driven approach to QA • Goals for model-driven testing - Modelling for Test Data preparation: • Creating reusable data sets for functional testing and for load & performance testing

  20. Data – a Look at all the Types Involved
  • Transaction, event and rules-driven data (data that changes very frequently): transactions (interactions through phone, chat, web forms, emails etc.), system-rendered, event-driven updates (e.g. history log, audit trail), calculated fields, values returned from API calls to other applications, data look-ups from other sources (e.g. customer rating). Created using the application, API calls & automation.
  • Environment settings: server settings (e.g. session time-out, number of retries, page and memory settings), network parameters and settings, test-specific settings such as the number of concurrent users / sessions. Based on vendor guidelines, extrapolated using test results.
  • Business-defined data that must be set up initially: application master data (drop-downs, lists of values etc.), user data (user groups, roles and sample user logins), application settings such as user navigation rules (e.g. the IVR menu tree), phone and chat routing rules in CCA, assignment rules in Siebel, merge & de-duplication rules (e.g. for contacts and cases in Siebel). Actual values defined by SMEs.
  • Business-defined data that is created regularly but changes infrequently: business entities, attributes and relationships; customer name, profile, contact information, account numbers, dummy credit card numbers. Dummy values or cloned & masked data.
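A minimal sketch of the "cloned & masked data" idea listed above: production-like records are copied, but personally identifiable fields are replaced with deterministic dummy values so the same source value masks to the same result across applications (the field names are illustrative, not the actual data model):

```python
import hashlib

# Illustrative "clone & mask" sketch: copy a production-like record but replace
# personally identifiable fields with deterministic dummy values, so the same
# source value always masks to the same value across applications.
def mask_value(value, prefix):
    digest = hashlib.sha256(value.encode("utf-8")).hexdigest()[:8]
    return f"{prefix}_{digest}"

def clone_and_mask(record, pii_fields=("name", "email", "credit_card")):
    masked = dict(record)
    for field in pii_fields:
        if field in masked:
            masked[field] = mask_value(str(masked[field]), field)
    return masked

source_record = {"case_id": 1042, "name": "Jane Buyer", "email": "jane@example.com",
                 "credit_card": "4111111111111111", "request_type": "refund"}
print(clone_and_mask(source_record))
```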

  21. Test Data: Challenges, Options & Results
  • Challenges: many complex combinations to test (e.g. IVR menu options, bid items, service request types, agent skill types) • ensuring consistent data across applications (e.g. access control, Site, Siebel) • ensuring coherent data across applications, to reflect real-life end-to-end scenarios • identifying boundary conditions exhaustively • simulating ageing of data • creating large volumes for load testing • testing analytical reports • TDM tools require a source database • complex license cost estimation with overlapping legacy applications
  • Options evaluated: Optim, Datamaker • Datagenerator • in-house developed tools, i.e. ADPART & OATS. Finally chosen: automation scripts created in Selenium • proprietary APIs from eBay • Excel macros created by Cognizant Business Analysts • custom scripts for data migration & Siebel EIM
  • Results achieved: exhaustive scenario coverage • data prepared for Training, QA and LnP environments • more than 135 million records created for load testing • saved the license cost of TDM tools
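The record volumes above were produced with a mix of Selenium scripts, eBay's proprietary APIs, Excel macros and custom Siebel EIM scripts; the sketch below only shows the general bulk-generation pattern of streaming rows into flat files in batches for an EIM-style load (the column names and value lists are assumptions, not the actual EIM schema):

```python
import csv
import itertools
import random

# Illustrative bulk data-generation sketch for an EIM-style flat-file load.
# Column names and value lists are invented; the real job used eBay's own schemas,
# Selenium scripts, proprietary APIs and custom Siebel EIM scripts.
REQUEST_TYPES = ["refund", "item_not_received", "account", "billing"]
CHANNELS = ["phone", "chat", "email", "web_form"]

def generate_service_requests(total):
    for i in range(total):
        yield {
            "SR_NUM": f"SR{i:09d}",
            "CONTACT_ID": f"CT{random.randrange(1_000_000):07d}",
            "TYPE": random.choice(REQUEST_TYPES),
            "CHANNEL": random.choice(CHANNELS),
            "AGE_DAYS": random.randrange(0, 365),   # crude way to simulate ageing of data
        }

def write_batches(total, batch_size=500_000, prefix="eim_sr_batch"):
    """Stream the generated rows into CSV files of batch_size rows each."""
    rows = generate_service_requests(total)
    for batch_no in itertools.count(1):
        batch = list(itertools.islice(rows, batch_size))
        if not batch:
            break
        with open(f"{prefix}_{batch_no}.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=batch[0].keys())
            writer.writeheader()
            writer.writerows(batch)

write_batches(2_000_000)   # scale the total up towards the 135M+ used for load testing
```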

  22. Model driven approach to QA • Goals for model-driven testing - Modelling for Load & Performance Testing: • Perform to scale, i.e. are the expected response times met for each of the transactions? • Scale to perform, i.e. the extent to which the servers and applications can scale to support the maximum number of users without performance degradation

  23. Perform to Scale and Scale to Perform • Objectives: infrastructure sizing validation for the final production environment • creating a benchmark data point for critical and common user transactions • identifying single points of failure and design limitations, if any • estimating the optimal number of users that could be supported at peak hours • Identifying and benchmarking critical user transactions [table of sample response-time objectives shown on the slide] • Load simulation approach: nearly 200 million records created via Siebel EIM for the performance test • LnP data volume downsized 40% compared to the estimate for Production • LoadRunner configured for 1350 concurrent Siebel users & 500 CCA users • tests simulated for users in Dublin and Manila, using the SHUNRA WAN emulator • results extrapolated for the full data volume and 3600 concurrent users! • Where did the OEM data fail? Per-user memory requirement revised to 50 MB from 8 MB! • AOM & EAI servers crashed
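The deck does not spell out how the measured results were extrapolated to the full data volume and 3600 concurrent users; the sketch below shows one simple approach, a linear fit of a resource metric against the tested user levels projected to the target, where the measurements are invented and linear scaling is a simplifying assumption:

```python
# Illustrative extrapolation sketch: fit a simple linear model of a resource metric
# against the tested concurrent-user levels and project it to the 3600-user target.
# The measurements are invented and linear scaling is a simplifying assumption.
measured = [(500, 12.0), (900, 20.0), (1350, 29.0)]   # (concurrent users, % CPU)

def linear_fit(points):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

slope, intercept = linear_fit(measured)
target_users = 3600
print(f"Projected CPU at {target_users} users: {slope * target_users + intercept:.1f}%")
```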

  24. Perform to Scale and Scale to Perform • Tasks: performance compared for 2 different server models, M5K and T5240, to decide the right database server model for Unify • determine optimal settings for the Call Center and EAI Object Manager components • identification of optimal load balancer parameters [memory and CPU utilization charts shown on the slide] • Value additions: production server sizing revalidated and capacity enhanced based on load test results • response time for 91% of transactions brought down to less than 1 sec • desired server settings and database indexes determined for poorly performing SQLs • issues found with the server configuration, CTI toolbar and OM server
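A minimal sketch of the kind of response-time analysis behind a statement such as "91% of transactions in under 1 second": compute the share of samples that meet the objective and a percentile value per transaction sample set (the timings here are randomly generated purely for illustration):

```python
import random

# Illustrative sketch of the response-time analysis behind a statement such as
# "91% of transactions under 1 second": compute the share of samples that meet the
# objective and a percentile value. The sample timings are randomly generated here.
response_times = [random.lognormvariate(-0.6, 0.5) for _ in range(10_000)]   # seconds

def share_within(samples, objective_seconds):
    return sum(1 for t in samples if t <= objective_seconds) / len(samples)

def percentile(samples, pct):
    ordered = sorted(samples)
    return ordered[int(pct / 100 * (len(ordered) - 1))]

print(f"Transactions within the 1 s objective: {share_within(response_times, 1.0):.1%}")
print(f"90th percentile response time: {percentile(response_times, 90):.2f} s")
```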

  25. 10 Learnings • [Ten learnings shown as numbered callouts on the slide]

  26. Questions?

  27. Thank You!

  28. Generating Test Scenarios with one click!

  29. Defining Variables at each step
