
The Validation of Internal Ratings Systems


Presentation Transcript


  1. The Validation of Internal Ratings Systems. Paul Waterhouse, Managing Director, Global Head of Analytics, Standard & Poor’s Risk Solutions. December 14, 2006, Athens

  2. Introduction Standard & Poor’s Risk Solutions wishes to share with you some of the experiences we have gained since 2001 through assisting banks, insurers, utilities and corporates to validate their internal ratings systems. In particular, we intend to focus today on: • What constitutes a robust framework for validation • Typical issues that we have encountered when assisting in the validation of SME and other models

  3. Agenda • The Validation Challenges facing Banks • A Framework for Validation • Typical problems encountered in validating models for SMEs and other sectors • Specific challenges in the validation of low data sectors

  4. 1. The Validation Challenges facing Banks

  5. Validation Challenges Quantitative Validation: • Back Testing • Benchmarking Qualitative Validation: • Model Design • Data Quality • Monitoring & Reporting • Use Test Many parties can carry out quantitative validation …some can even do it well …but the qualitative validation is where most deficiencies exist

  6. Validation Challenges • Basel 2 requires the establishment of robust protocols for the construction, testing, use and surveillance of models for risk assessment. • While these protocols are sound, they also impose a level of rigour that exceeds that historically employed by many institutions – rendering model development both time-consuming and more challenging than in the past. • Specific practical challenges include: • Demonstrating the suitability of construction / testing data • Dealing with incomplete or inadequate data • Validating the conceptual soundness of assumptions made • Ensuring the conceptual soundness of the final model • Validating performance in low data sectors

  7. Validation Challenges • Data challenges: the data available for the construction and testing of models typically exhibits shortcomings: • Inconsistency in the default definition used • Unclean or incomplete data samples • Not representative of the likely future risk profile or types of risk that will be assessed with the model • Lacking full statistical credibility for building models solely on the basis of past experience • Extraction of data (even where it exists) may be difficult, costly and time-consuming

  8. Validation Challenges Conceptual soundness issues: • Models frequently built independently of credit-risk experts may fail to capture all pertinent risk differentiators due to deficiencies in the data sample used for model building • Assumptions made (in the absence of data) may be dubious or difficult to validate • It is difficult to validate plausible models and assumptions when data is scant. Performance validation: • Challenges in demonstrating that a “model” works well when the volume of actual defaults is not statistically credible.

  9. Validation Challenges In summary, most validation challenges are about DATA! Poor data …too little data …unrepresentative data …too much focus on the data at the expense of conceptual soundness. A rigorous model construction process for any sector requires: • Analysis and comprehension of the data available • Adjustments to that data so that it is consistent with the types of risk that the model will be used to evaluate • Expert knowledge of the inherent risk factors that will affect risk assessment • Avoidance of over-reliance on the data itself rather than on the conceptual risk drivers of the sector. This applies equally to retail and SME risks – not just specialised finance, financial services or large corporates!

  10. 2. A Framework for Validation

  11. Validation Framework • “Validation” involves not only a comprehensive review of the structure, calibration, performance and operation of an internal rating system (IRS) but also scrutiny and confirmation that a coherent, comprehensive construction process was followed to design the IRS and that the IRS is methodologically sound. • A validation framework describes: • How to independently validate the internal rating system based on the bank’s own data, methodology and other choices. • It documents the quantitative, qualitative and practical risk controls that together demonstrate the robustness of the bank’s rating system. • It also describes the process through which validation results are reported and acted upon when necessary. • Regulators expect an independent validation of the bank’s rating system to occur, and they review the bank’s validation framework and results during the bank’s IRB application.

  12. Validation Framework Regulatory Reference The validation framework encompasses ALL of the following aspects, as depicted in the Basel Committee’s schematic: • Internal validation by the individual bank, which supervisory examination in turn evaluates • Validation of the rating system: model design and the risk components (PD, LGD, EAD), assessed through back testing and benchmarking • Validation of the rating process: data quality, reporting and problem handling, and internal use by credit officers. Source: Working Paper No. 14, Basel Committee – February 2005

  13. Validation Framework Standard & Poor’s Risk Solutions typically provides services to: • Review a bank’s existing Validation Framework for designated asset classes and models and make recommendations for enhancement if required • Deliver our proprietary “best practice” Validation Framework • Execute agreed parts of the Validation Framework with the Internal Validation Team. • Address issues (if any) arising within the context of an assignment

  14. Validation Framework • The first phase involves the delivery of a comprehensive, best practice validation framework adapted to the bank’s needs and taking into consideration the characteristics of the asset classes and lending processes concerned. • The framework covers PD and LGD methodologies and models for retail, SME, large corporate, financial institutions and specialised finance. • It includes protocols for quantitative models, expert judgement methodologies, and third-party aligned and non-aligned methodologies. • The framework specifies HOW TO conduct the internal validation: • The steps and metrics required to review the constructional and methodological aspects of rating tools • The steps, tests to perform, and interpretation of results for assessing the performance of rating tools, particularly when few or no defaults occur • Protocols and procedures for monitoring and measuring compliance with the credit rating frameworks and processes and the intended utilisation of rating outcomes • Reporting and documentation requirements

  15. Validation Framework • S&P RS has partitioned the validation process into the following components: CONTENT: 1. Accepted Validation Principles 2. Developmental Review Principles 3. Methodology Review Principles 4. Performance Review Principles 5. Application Review Principles 6. Data Review Principles 7. Processes Review Principles 8. Utilization Review Principles 9. Positioning within the bank’s Risk Governance Framework

  16. Validation Framework Each section drills into a generic process flow: 3. Methodology Review Principles 3.1 Models based on internal historical default data 3.1.1 Review Purpose, Scope and Definitions 3.1.2 Description of tests to be conducted and associated objectives 3.1.3 Guidance for interpretation of test results and action triggers 3.1.4 Test frequency 3.1.5 Test reporting format 3.1.6 Test review and follow-up action monitoring process

  17. Validation Framework The test section describes how bespoke, state-of-the-art tests are built: 3.1.2 Description of tests to be conducted and associated objectives 3.1.2.1 Homogeneity of risk profile 3.1.2.2 Default and loss characteristics 3.1.2.3 Suitability of data 3.1.2.4 Replicability/documentation of process 3.1.2.5 Identification of assumptions made 3.1.2.6 Justification for techniques used 3.1.2.7 Assumptions monitoring process 3.1.2.8 Adequacy of performance measures used 3.1.2.9 Consistency with other models

  18. Validation Framework Specific guidance is provided on how to approach the risks analysed: 3.1.2.6.1 Risk dimension assessment level - illustration The table below is for illustration purposes only. Model: Electronic Manufacturing Review: Risk dimension review Risk dimension: Competitiveness Objective: The objective of this level of the methodological review is to examine the adequacy of the information used to assess individual obligors. In this respect the review focuses on the following aspects:

  19. Validation Framework Suggestions are made to organize findings: (Illustration 1) 3.1.2.7 Assumptions Monitoring Review The table below documents the Assumption review results:

  20. Validation Framework Suggestions are made to organize findings: (Illustration 2) 3.1.2.8 Adequacy of performance measures used 3.1.2.8.1 Statistical models based on internal default data

  21. Validation Framework

  22. Validation Framework Construction review • Before considering the performance of risk assessment models and processes, the initial consideration in the validation process is a review of the developmental and deterministic evidence. The intention is to answer the question: “Could the rating system be expected to work reasonably if it is implemented as designed?” • The developmental or construction review involves examining the rigour of the development process itself (data collection, storage, division of duties, college of experts, etc.) as well as the adequacy of documentation pertaining to construction. Conceptual soundness or methodological review • This review involves analysing the robustness of the credit risk assessment methodology from a theoretical perspective. The intent of this review is to address the question of whether (theoretically) the credit risk assessment framework ought to work. In other words, does the methodology make sense from a conceptual perspective?

  23. Validation Framework Construction and methodology reviews examine: • The data employed in the construction process. • The nature of the construction process. • The segmentation of the portfolio by risk grouping. • The risk dimensions and credit risk factors employed • The weights (or other algorithms) used for combining risk factor scores to arrive at a final rating and estimate of probability of default. • The calibration and testing processes employed. • The rules of use and application of the credit risk assessment methodologies. • The documentation detailing the theoretical underpinning of the credit risk assessment methodology.

  24. Validation Framework Performance review • The intention is to determine how well the framework performs in practice. • Testing is based on a series of statistical tests aligned to the models’ objectives and horizons. • Tests focus on the rank ordering power, predictive power and stability of the models being validated (a sketch of a rank-ordering test follows below). • Test outputs need to be interpreted in the light of the quality and quantity of data available to build and back-test models. • Back-testing is the practice of comparing predictions to outcomes and is applied across portfolio segments, across time and across rating categories. • Such a review is much less meaningful for low data sectors, since the observed number of defaults will rarely, if ever, be of sufficient volume to render such a test statistically credible. The application of additional performance tests is critical.
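
The rank-ordering power mentioned above is commonly summarised by the accuracy ratio (Gini coefficient), which is derived from the area under the ROC curve. Below is a minimal sketch, assuming a hypothetical portfolio with simulated default flags and model scores; it is illustrative only and not the deck's own test suite.

```python
# Minimal sketch: discriminatory power via AUC and the accuracy ratio.
# All data below are simulated and hypothetical; higher score = higher risk.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
true_pd = rng.uniform(0.005, 0.10, n)       # hypothetical one-year PDs
defaults = rng.binomial(1, true_pd)         # simulated default outcomes (0/1)
scores = true_pd + rng.normal(0, 0.02, n)   # noisy model scores

auc = roc_auc_score(defaults, scores)       # area under the ROC curve
accuracy_ratio = 2 * auc - 1                # Gini coefficient / accuracy ratio
print(f"AUC = {auc:.3f}, Accuracy Ratio = {accuracy_ratio:.3f}")
```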

  25. Validation Framework Back-Testing • Back testing involves a historical comparison of actual performance against expected performance. • For PD models the emphasis of back testing is on comparing observed defaults versus expected defaults over time. • Such testing is a critical component of the performance review – and of the validation itself – for models that are experience-based. • “Back-testing” implies that default data is available. When this is the case, common statistical back-tests are used (see the sketch below).
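
A common back-test of this kind is a per-grade binomial test of observed against expected defaults. The sketch below assumes hypothetical grade PDs and default counts, and it assumes independence of defaults; production back-tests typically adjust for default correlation, which the plain binomial test ignores.

```python
# Minimal sketch: per-grade binomial back-test of observed vs expected defaults.
# Grade PDs, obligor counts and default counts below are hypothetical.
from scipy.stats import binomtest

grades = {                      # grade: (assigned PD, obligors, observed defaults)
    "A": (0.0010, 1200, 2),
    "B": (0.0050, 2500, 18),
    "C": (0.0200, 1800, 49),
}

for grade, (pd_hat, n_obligors, n_defaults) in grades.items():
    # One-sided test: is the observed default count too high for pd_hat?
    result = binomtest(n_defaults, n_obligors, pd_hat, alternative="greater")
    flag = "REVIEW" if result.pvalue < 0.05 else "ok"
    print(f"grade {grade}: expected {pd_hat * n_obligors:.1f}, "
          f"observed {n_defaults}, p-value {result.pvalue:.3f} -> {flag}")
```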

  26. Validation Framework Benchmarking • Benchmarking involves comparing the outcomes of the model against independently produced results. • The chosen benchmarks for comparison may include internal as well as external references. • Benchmarks can include peer group results, comparisons against the results of other tools and models, external ratings and second expert opinions. • However, any benchmarks adopted should be carefully screened to ensure they are pertinent and relevant for the performance review so that misleading conclusions are not drawn from the comparisons. • The conclusions of any benchmarking exercise should be based, where possible, on statistical and measurable tests of the internal rating system’s performance.
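
One simple, measurable benchmarking test of the kind described above is a rank correlation (for example Kendall's tau) between internal grades and an external benchmark. The sketch below is illustrative; the rating symbols, numeric mapping and sample grades are all hypothetical.

```python
# Minimal sketch: benchmarking internal grades against external ratings via
# Kendall's tau. All grades and the symbol-to-number mapping are hypothetical.
from scipy.stats import kendalltau

# Hypothetical mapping of rating symbols to ordinal numbers (1 = strongest).
external_map = {"AA": 1, "A": 2, "BBB": 3, "BB": 4, "B": 5}

internal_grades = [1, 2, 2, 3, 4, 4, 5, 3, 2, 5]   # internal 1-5 scale
external_ratings = ["AA", "A", "BBB", "BBB", "BB", "B", "B", "BB", "A", "B"]
external_grades = [external_map[r] for r in external_ratings]

tau, pvalue = kendalltau(internal_grades, external_grades)
print(f"Kendall tau = {tau:.2f} (p = {pvalue:.3f})")
```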

  27. Validation Framework Application review • The intent of this component of the validation framework is to analyse and review whether: • The credit risk assessment processes used to formulate a risk assessment have been followed as intended, • Compliance with the above processes is monitored, • Model shortcomings are identified and communicated to end-users in a timely and practical fashion.

  28. Validation Framework Model application review • Have the intended “checks and balances” of model results been applied as intended? • Are overrides comprehensively monitored and communicated, and is the rationale underpinning each override appropriately documented? • Is additional scrutiny conducted (as recommended in the methodology) for risk estimates that deviate substantively from the sector median, very large exposures and other non-standard assessments? • Is the correct model always used for each asset class? • Are exceptions recorded appropriately and communicated? Is there any evidence of users modifying risk assessment outcomes to “side-step” parts of the process that may be regarded as onerous or politically undesirable? • Are minimal informational requirements to assign an internal risk assessment strictly adhered to? Are exceptions identified, communicated and rationalised?

  29. Validation Framework Process compliance review • A review of the processes put in place to ensure the credit risk assessment framework is effectively employed as intended: • Are processes followed in practice? • Is compliance with such procedures and processes effectively monitored? User Manual Review • Particular emphasis is placed on the communication of user guides to those who will employ specific models, and on the training of users. • User handbooks should exist and incorporate: • A precise definition of a model’s scope • Details of the surrounding processes, protocols and procedures to be employed • A description of model inputs required • A description of model outputs, their financial interpretation and their intended application • A description of model shortcomings / limitations

  30. Validation Framework Data review • There is a need to ensure data is accurate, complete, secure, sufficient and representative of the risk profile being assessed. Accordingly, validation should seek to verify: • That appropriate validation checks exist and are applied in practice for any data used as inputs to models or tools (a sketch of such checks follows below) • That robust feasibility checks are in place • That analysts’ adjustments to raw or automated inputs to models are centrally saved and reviewed prior to approval, and that the rationale underlying each adjustment is documented • That minimum data requirements for the assignment of internal ratings are established, communicated and generally adhered to • That the quantity and quality of data used to construct and validate processes and models are robust and appropriate • That the manner in which data (e.g. internal rating results, credit risk factor scores, overrides) are stored is appropriate for other validation reviews • That data requirements for validation purposes can easily be modified and expanded as risk assessment models evolve or validation needs expand • This part of the validation review also needs to examine data management processes in a broad manner.
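
As a hedged illustration of such input validation checks, the sketch below runs a few automated plausibility tests over a hypothetical obligor table; the field names, bounds and thresholds are assumptions, not prescriptions.

```python
# Minimal sketch: automated plausibility checks on model input data.
# The table, field names and thresholds below are hypothetical examples.
import pandas as pd

obligors = pd.DataFrame({
    "obligor_id":   [101, 102, 103, 104],
    "turnover":     [2.5e6, -1.0, 8.0e5, None],  # annual turnover in EUR
    "leverage":     [0.45, 0.30, 1.80, 0.55],    # debt / assets
    "default_flag": [0, 0, 1, 2],                # should be strictly 0/1
})

issues = []
if obligors["turnover"].isna().any():
    issues.append("missing turnover values")
if (obligors["turnover"].dropna() <= 0).any():
    issues.append("non-positive turnover values")
if (obligors["leverage"] > 1.5).any():
    issues.append("implausible leverage (> 1.5) - review raw data")
if not obligors["default_flag"].isin([0, 1]).all():
    issues.append("default_flag outside {0, 1}")

for issue in issues:
    print("DATA CHECK FAILED:", issue)
```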

  31. Validation Framework Utilisation review • This review is to verify that internal ratings or risk assessments are being employed as intended. • Do end-users have confidence in the rating system’s quality? • Do they know how to use it? • The review also examines the application of risk assessments: • Within pricing decisions • Within business decisions (accept / not accept) • For regulatory and economic capital allocation (credit limits) • For monitoring processes • For senior management performance review and strategic planning • Documentation of exceptions should be robust, with supporting rationale. The bank’s use tests need to be consistent with its internal risk governance and the role of senior management.

  32. Validation Framework Validation process This part of the framework deals with: • Surveillance or review frequency • Interaction of model results and human judgement • Evolution of the credit risk methodology • Technological environment • Roles & responsibilities

  33. 3. Typical problems encountered in validating models for SMEs and other sectors

  34. Typical Validation Problems – SME & Other Sectors General Issues: • Documentation or data provided for third-party models is often insufficient and needs “back-filling” • Especially that related to the construction of the model, adjustments applied to data, the inherent methodology (or conceptual soundness) and performance. • A third-party model is employed when in-house data is available • There is an increasing regulatory preference for in-house model development (when data is sufficient) and for third-party models to be used for benchmarking only. • Conceptual unsoundness • Data for model building is not appropriate for the intended types of risk to be evaluated, and the data is not adjusted.

  35. Typical Validation Problems – SME & Other Sectors • Dubious internal calibrations and adjustments • The definition of default in the data differs from that intended, without adjustment to the data or outcomes • Point-in-time versus through-the-cycle adjustments are poor (see the sketch below) • Mapping / quantification of PD outcomes is inappropriate • Conceptual unsoundness • Model trained on past data but with no interpretation of the data or results – the final model has counter-intuitive results or inter-relations that cannot be explained and were not investigated • Important intuitive credit risk factors omitted because they did not manifest their importance, owing to the short time period of the sample • Important intuitive credit risk factors omitted because they did not manifest their importance, owing to the sample being too homogeneous relative to the intended model application
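
To make the point-in-time (PIT) versus through-the-cycle (TTC) distinction concrete, the sketch below contrasts annual observed default rates (a PIT view) with their long-run average (a simple TTC estimate) for one hypothetical grade; the rates and years are invented.

```python
# Minimal sketch: point-in-time (PIT) vs through-the-cycle (TTC) calibration
# for one rating grade. The annual default rates below are invented.
annual_default_rates = {
    2000: 0.012, 2001: 0.018, 2002: 0.035, 2003: 0.041,
    2004: 0.022, 2005: 0.010, 2006: 0.008,
}

# TTC estimate: long-run average across the observed cycle.
ttc_pd = sum(annual_default_rates.values()) / len(annual_default_rates)

for year, pit_pd in annual_default_rates.items():
    gap = pit_pd - ttc_pd  # cyclical gap a PIT rating would reflect
    print(f"{year}: PIT = {pit_pd:.3f}, TTC = {ttc_pd:.3f}, gap = {gap:+.3f}")
```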

  36. Typical Validation Problems – SME & Other Sectors • Incomplete construction process • Impact of changing economic conditions and stress scenarios not investigated and incorporated • Assumptions made in models without investigation, rationale, consideration and impact study of alternatives • Sensitivity testing not conducted during process • Segmentation process • Based solely on historical data analysis without conceptual review and rationale • Not fully documented

  37. 4. Specific challenges in the validation of low data sectors

  38. Low Data Sector Challenges Issues surrounding a purely quantitative approach: • Qualitative risk factors • Parental support issues • Relative versus absolute financial factors • Market “norm” profiles • Finite size of rated population • Evolving market dynamics • Interaction of credit risk factors

  39. CASE STUDY Major Global Bank In 200X, a major global bank commissioned third party vendors to develop internal ratings frameworks for a number of low-data sectors in which the Bank was active. The objectives of the bank were: • To improve its risk management practices in advance of the developing Basel 2 capital requirements • To quantify default risk for pricing, economic capital allocation and risk management. In view of the lack of a statistically significant and pertinent volume of historical in-house default experience, the Bank recognised the need to link its internal credit risk assessments to the observed default experience of a third party.

  40. CASE STUDY Performance of the IRS on rated obligors: 95% within +/- 2 notches. Performance on non-rated obligors, comparing the outcome of the IRS against the opinion of Standard & Poor's analysts: • Only 12% of obligors were assessed correctly by the IRS • In 40% of cases the IRS outcome was more than 2 notches away • In 10% of cases the IRS outcome was more than 6 notches away • 3% of cases had outcomes more than 12 notches away. A sketch of this notch-deviation calculation follows below.
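
The notch-deviation statistics above can be computed mechanically once both rating systems are expressed on a common ordinal scale. The sketch below uses invented IRS and analyst notch assignments; it reproduces the type of calculation, not the case study's actual figures.

```python
# Minimal sketch: distribution of notch deviations between an IRS and a
# benchmark opinion. All notch assignments below are invented examples,
# expressed on a common ordinal scale (e.g. 1 = AAA, 2 = AA+, ...).
import numpy as np

irs_notches     = np.array([5, 8, 10, 7, 12, 9, 15, 6, 11, 8])
analyst_notches = np.array([5, 9, 13, 7, 10, 16, 15, 7, 18, 9])

deviation = np.abs(irs_notches - analyst_notches)
print(f"exact match:         {np.mean(deviation == 0):.0%}")
print(f"more than 2 notches: {np.mean(deviation > 2):.0%}")
print(f"more than 6 notches: {np.mean(deviation > 6):.0%}")
```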

  41. SUMMARY • To appropriately employ observed default rates for agency ratings, institutions need to have strongly aligned methodologies. • Creating an aligned methodology is more complex than banks and others sometimes anticipate, in S&P Risk Solutions’ experience. • In particular, using agency default data in a robust manner is challenging, since significant elements of the required detail are not in the public domain.

  42. Analytic services and products provided by Standard & Poor’s are the result of separate activities designed to preserve the independence and objectivity of each analytic process. Standard & Poor’s has established policies and procedures to maintain the confidentiality of non-public information received during each analytic process.
