
City University of Hong Kong Professional Seminar 17 March 2006


Presentation Transcript


  1. City University of Hong Kong Professional Seminar 17 March 2006 Part II: Introduction to IRB Approaches and Internal Rating Systems under Basel II Dr Michael Taylor Hong Kong Monetary Authority

  2. Outline • Background: Quantitative Concepts of IRB • What are internal ratings systems? • What is validation? • HKMA Approach to Validation

  3. Quantitative Concepts of IRB: Some Background • Rating systems have been used by the industry for almost 50 years in making credit decisions and managing credit risk • In the past two decades, the industry has put a lot of effort into enhancing the application of rating systems, in particular by linking the outputs of rating systems (i.e. rating grades or credit scores) to banks’ profits and losses and to the optimal use of capital • e.g. to maximise profit given an acceptable level of risk • This involves the application of theories in statistics, economics and finance • IRB reflects the essence of this evolution in measuring credit risk over the past 20 years

  4. Quantitative Concepts of IRB: Expected Loss & Unexpected Loss • “Expected loss”, as its name suggests, is expected. Under IRB, an AI should cover this by provisioning • “Unexpected loss” is the loss from unexpected unfavourable situations. Under IRB, an AI should cover this by capital

  5. Quantitative Concepts of IRB: Expected Loss & Unexpected Loss [Chart: frequency distribution of the potential loss rate, split into expected loss (covered by provisioning), unexpected loss (covered by capital) and unexpected loss not covered; the shaded tail area is equal to 1 - “confidence level”] • In IRB, the confidence level is set at 99.9%, meaning that there is a 0.1% chance (once in 1,000 years) that an AI’s capital would fail to absorb the unexpected loss and the AI becomes insolvent
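
To make the split concrete, the following is a minimal numerical sketch (not part of the original slides): it simulates a portfolio loss-rate distribution and separates the expected loss covered by provisions, the unexpected loss covered by capital up to the 99.9% confidence level, and the uncovered tail. The beta-distributed loss rates are an illustrative assumption.

```python
# Minimal sketch, assuming a simulated loss-rate distribution (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
loss_rates = rng.beta(a=2, b=50, size=100_000)    # hypothetical annual portfolio loss rates

confidence_level = 0.999                          # IRB confidence level
expected_loss = loss_rates.mean()                 # covered by provisioning
var_999 = np.quantile(loss_rates, confidence_level)
unexpected_loss = var_999 - expected_loss         # covered by capital
uncovered_share = (loss_rates > var_999).mean()   # ~0.1% of outcomes are not covered

print(f"Expected loss (provisions):  {expected_loss:.2%}")
print(f"99.9% loss quantile:         {var_999:.2%}")
print(f"Unexpected loss (capital):   {unexpected_loss:.2%}")
print(f"Share of uncovered outcomes: {uncovered_share:.2%}")
```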

  6. Quantitative Concepts of IRB: Risk Components • Under the IRB Approach, expected loss and the covered portion of unexpected loss are calculated by using estimates of “risk components” as inputs to “risk-weight functions” • The risk components are: • Probability of default (“PD”) • How likely is the borrower to default in the coming 12 months? • Loss given default (“LGD”) • How much will the AI lose, as a percentage of EAD, if the borrower defaults? • Exposure at default (“EAD”) • How much will the borrower owe the AI when he defaults? • Effective maturity (“M”) • The weighted-average timing of the cash flows the AI receives from a facility
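
The simplest way these components combine is the expected-loss identity EL = PD × LGD × EAD; the short helper below is only an illustration of that identity, not the Basel risk-weight function for unexpected loss.

```python
# Minimal sketch of the expected-loss identity EL = PD x LGD x EAD.
# (The capital charge for unexpected loss uses the supervisory risk-weight
# functions, which are not reproduced here.)
def expected_loss(pd_: float, lgd: float, ead: float) -> float:
    """Expected loss in currency units for a single facility."""
    return pd_ * lgd * ead

# Hypothetical facility: 2% one-year PD, 45% LGD, EAD of 1,000,000
print(expected_loss(pd_=0.02, lgd=0.45, ead=1_000_000))   # 9000.0
```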

  7. What is a Rating System? [Diagram: rating grades A B C D E F G, plus default] • A rating system is one by which borrowers/facilities are systematically assigned to (grouped into) rating grades according to the credit risk characteristics (rating criteria or risk factors) of the borrowers/facilities

  8. What is a Rating System? [Table: rating grades A, B, C, D, E, F, G and default, with PDs of 1%, 3%, 5%, 10%, 20%, 40%, 80% and 100% respectively] • Homogeneity • Borrowers/Facilities assigned to the same rating grade should share similar risk characteristics • Risk differentiation • Borrowers/Facilities assigned to different rating grades should have different risk characteristics • Risk quantification • Risk component(s) is/are estimated for each rating grade

  9. Types of Rating System: Expert Judgement-based System • Ratings are assigned subjectively by experienced credit officers, usually following some guidelines - this is the most classic form of expert judgement-based system • The major problem of an expert judgement-based system is that it is not transparent: the rating assignment process takes place inside the minds of credit officers and may result in inconsistency amongst credit officers, and over time for the same officer • Usually expert judgement-based systems are used for portfolios with scarce default events (e.g. sovereign)

  10. Types of Rating System: Expert Judgement-based System [Diagram: risk factors such as industry trend, economic outlook and management quality are mapped by expert judgement to rating grades A to G]

  11. Types of Rating System: Model-based System • Rating assignment is based on objective risk factors (e.g. income, financial ratios), with these factors and their relative importance being determined by statistical analysis and/or economic and finance theory - the pure form of model-based system • The rating assignment process is mechanical and leaves little room for manipulation by judgement • Transparent, but rigid and subject to model risk • Model-based systems can be applied to various types of exposures • Generally, model-based systems are more applicable to exposures with abundant default data. But there are also some models designed for exposures with few default events, especially those based on economic and finance theory (usually referred to as “structural models”) • Risk components can be directly estimated from certain types of model-based systems

  12. Types of Rating System: Model-based System [Diagram: risk factors such as financial ratios, GDP growth and interest rates are mapped by a model to rating grades A to G]

  13. Types of Rating System: Hybrid Rating System • Rating systems that use both expert judgements and statistical modelling techniques - the most commonly-used rating systems in the industry [Diagram: spectrum of rating systems, running from the classic expert judgement-based system, through expert judgement-based systems with quantitative guidelines, constrained judgement, expert-derived models and model-based systems with judgemental overrides, to the pure model-based system; hybrid systems are the most commonly used in the industry]

  14. Types of Rating System: An Example [Table: risk factors, scores and relative importance. Subjective factors: 1. Management (relative importance 32%): Strong = 100, Weak = 0; 2. Entry barrier (25%): High = 100, Low = 0. Objective factors: 3. Gearing (34.5%): <=50% = 100, >50% = 0; 4. Earnings growth (8.5%): >=10% = 100, <10% = 0. Risk factors & scores determined by judgements; relative importance determined by models]

  15. Types of Rating System: An Example [Table: score ranges mapped to rating grades: (95,100] = A, (70,95] = B, (60,70] = C, (50,60] = D, (40,50] = E, (20,40] = F, [0,20] = G] • The range of scores lies between “0” (i.e. weak management, low entry barrier, gearing >50% and earnings growth <10%) and “100” (i.e. strong management, high entry barrier, gearing <=50% and earnings growth >=10%) • Assume the AI maps score ranges to rating grades as above • e.g. if a borrower has a strong management, the industry has a low entry barrier, the gearing is 80%, and earnings growth is 30%, then it would have credit score: 100 × 32% + 0 × 25% + 0 × 34.5% + 100 × 8.5% = 40.5 and the borrower would be assigned to rating grade E
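
The scorecard in slides 14 and 15 can be written out as a short script. The weights, score bands and the worked example are taken from the slides; the helper functions themselves are illustrative.

```python
# Sketch of the scorecard in slides 14-15: weight the factor scores and map
# the total to a rating grade. Weights and score bands come from the slides;
# the mapping helpers are illustrative.
WEIGHTS = {"management": 0.32, "entry_barrier": 0.25,
           "gearing": 0.345, "earnings_growth": 0.085}

# (lower bound exclusive, upper bound inclusive) -> grade, as in slide 15
GRADE_BANDS = [(95, 100, "A"), (70, 95, "B"), (60, 70, "C"), (50, 60, "D"),
               (40, 50, "E"), (20, 40, "F"), (-1, 20, "G")]

def credit_score(scores: dict) -> float:
    return sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)

def assign_grade(score: float) -> str:
    for low, high, grade in GRADE_BANDS:
        if low < score <= high:
            return grade
    raise ValueError("score outside [0, 100]")

# Borrower from the example: strong management (100), low entry barrier (0),
# gearing 80% (0), earnings growth 30% (100)
score = credit_score({"management": 100, "entry_barrier": 0,
                      "gearing": 0, "earnings_growth": 100})
print(score, assign_grade(score))   # 40.5 E
```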

  16. Quantification of a Rating System • FIRB Approach for corporate, bank & sovereign exposures: • an AI estimates PD for each borrower rating • LGD, EAD and M are prescribed by the HKMA (supervisory estimates) • AIRB Approach for corporate, bank & sovereign exposures: • an AI estimates PD for each borrower rating • it also estimates LGD for each facility rating • it also estimates EAD for each facility type • it also calculates M according to rules prescribed by the HKMA • For retail exposures: • an AI estimates PD, LGD and EAD for each pool

  17. Quantification of a Rating System: PD of Corporate, Bank & Sovereign Exposures • For FIRB or AIRB Approach for corporate, bank & sovereign exposures, 3 methods can be used to estimate the PD of a borrower rating 1. Internal default experience 2. Mapping to external data 3. Statistical default models

  18. Quantification of a Rating System: PD of Corporate, Bank & Sovereign Exposures 1. Internal default experience: e.g. in the past 5 years, annual default rates of borrowers assigned to rating grade D were 10%, 12%, 9%, 8% and 11% respectively. PD of rating grade D for this year can be estimated as the simple average of these default rates, i.e.: (10% + 12% + 9% + 8% + 11%) ÷ 5 = 10%

  19. Quantification of a Rating System: PD of Corporate, Bank & Sovereign Exposures 2. Mapping to external data: e.g. by comparing the rating criteria of its internal rating system with those of Moody’s, an AI concludes that 50% of the borrowers assigned to its rating grade B would have Moody’s rating “Baa1”, 25% “A3” and 25% “Ba1”. In the past 5 years, average annual default rates of these Moody’s ratings were 3%, 2% and 4% respectively. The PD of the AI’s rating grade B can be estimated as: 50% × 3% + 25% × 2% + 25% × 4% = 3% There are many types of mapping methodologies

  20. Quantification of a Rating System: PD of Corporate, Bank & Sovereign Exposures 3. Statistical default models: e.g. an AI uses a model-based rating system, under which PD is estimated for each borrower. There are 3 borrowers assigned to rating grade C, with PDs estimated by the model to be 4.5%, 5% and 5.5% respectively. PD of rating grade C can be estimated as the simple average of the individual PDs of these borrowers, i.e.: (4.5% + 5% + 5.5%) ÷ 3 = 5% 5% will be used for all 3 borrowers for CAR purposes, regardless of the individual PDs generated from the model
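
The three estimation methods in slides 18 to 20 reduce to simple (weighted) averages; the sketch below reproduces the figures from the examples.

```python
# Sketch of the three PD-estimation methods in slides 18-20, using the
# numbers from the examples. All figures are illustrative.

# 1. Internal default experience: simple average of historical default rates
grade_d_default_rates = [0.10, 0.12, 0.09, 0.08, 0.11]
pd_grade_d = sum(grade_d_default_rates) / len(grade_d_default_rates)   # 0.10

# 2. Mapping to external data: weight external default rates by the share of
#    internal grade-B borrowers mapped to each external rating
external = {"Baa1": (0.50, 0.03), "A3": (0.25, 0.02), "Ba1": (0.25, 0.04)}
pd_grade_b = sum(share * rate for share, rate in external.values())    # 0.03

# 3. Statistical default models: average the model PDs of the borrowers
#    assigned to the grade, then apply that grade-level PD to all of them
model_pds_grade_c = [0.045, 0.05, 0.055]
pd_grade_c = sum(model_pds_grade_c) / len(model_pds_grade_c)           # 0.05

print(f"{pd_grade_d:.1%}  {pd_grade_b:.1%}  {pd_grade_c:.1%}")
```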

  21. What is Validation? • Basel definition: “encompasses a range of processes and activities that contribute to an assessment of whether ratings adequately differentiate risk, and whether estimates of risk components appropriately characterise the relevant aspects of risk” • AI’s responsibility to demonstrate its rating system meets minimum requirements • Review of an AI’s validation process a major part of the IRB recognition process

  22. Six Principles of the Validation Subgroup • Six Principles of the Validation Subgroup of the Basel Accord Implementation Group (i) Validation is fundamentally about assessing the predictive ability of a bank’s risk estimates and the use of ratings in credit processes (ii) The bank has primary responsibility for validation (iii) Validation is an iterative process (iv) There is no single validation method (v) Validation should encompass both quantitative and qualitative elements (vi) Validation processes and outcomes should be subject to independent review

  23. HKMA Approach to Validation (1) • Closely aligned with the 6 principles • AI conducts its own internal validation of the rating system, estimates of risk components & the risk ratings generation processes • Internal validation clearly documented & shared with HKMA • Individuals involved in validation must have necessary skills & knowledge and independence • No universal validation tool

  24. HKMA Approach to Validation (2) • No industry “best practice” standard on validation • Quantitative techniques very diverse, portfolio specific, and still evolving • Setting prescriptive quantitative standards & benchmarks for IRB systems could stifle innovation • Principles-based approaches by other supervisors • Guidance from Basel & participation in AIG V Subgroup • Views of external consultants & industry experts

  25. HKMA Approach to Validation (3) • Qualitative and Quantitative elements • Qualitative - processes, procedures & controls: corporate governance & oversight, independence, transparency, accountability, use of internal ratings, internal & external audit, use of external vendor models • Quantitative - generally accepted techniques: data quality, accuracy of PDs, LGDs & EADs, model logic & conceptual soundness, estimation & validation techniques, issues on LDPs, back-testing, benchmarking

  26. Corporate Governance & Oversight • Board & senior management involvement • Understanding of HKMA requirements • Understanding & approval of key aspects of IRB system • Ensures adequate resources and clearly defines responsibilities • Ensures adequate training • Integrates IRB systems with policies, procedures, systems, controls • Tracks differences between policies & actual practice (e.g. exceptions/overrides) • Quarterly MIS on rating system performance & regular internal review • Receives regular reports on internal ratings (e.g. risk profile of the AI, performance & predictive ability of internal rating system, changes in regulatory & economic capital, results of independent validation)

  27. Independent Rating Approval Process • General rule that approval of ratings & transactions should be separate from sales & marketing • Independent & separate functional reporting lines for rating “assignors” & rating “approvers” (e.g. credit officers, with well-defined performance measures) • Where ratings are assigned & approved within sales & marketing • mitigate the inherent conflict of interest with compensating controls (e.g. limited credit limits, independent post-approval review of ratings, more frequent internal audit coverage) • Where rating assignment or approval process is automated, verify accuracy & completeness of data inputs

  28. Independent Review of IRB System & Risk Quantification • Annual Review • Reviews conducted internally or by external experts • Functional independence • Should encompass all aspects of the process generating the risk estimates & usage • Compliance with established policies & procedures • Quantification process & accuracy of risk component estimates • Model development, use & validation • Adequacy of data systems & controls • Adequacy of staff skills & experience • Identify weakness, make recommendations & take corrective actions • Significant findings reported to senior management & the Board

  29. Transparency & Accountability • Transparency • Enable third parties to understand the design, operations & accuracy of a rating system & to evaluate whether it is performing as intended • An ongoing requirement: update documentation when there are changes • Achieved through documentation • Expert judgement-based vs. Model-based rating system • Accountability • Identify individuals or parties responsible for rating accuracy & rating system performance • Inventory of models & accountability chart of roles of parties • Establish performance standards • Senior individual to take responsibility for overall performance

  30. Use of Internal Ratings • The IRS & risk estimates should have substantial influence on decision-making & actions: • Credit approval & pricing, individual & portfolio limit setting • Portfolio monitoring & determining provisioning • Analysis & reporting of credit risk information • Modelling & management of economic capital • Assessment of total credit risk capital requirements under the AIs’ CAAP • Formulating business strategies & assessment of risk appetite • Assessment of profitability & performance, and determining performance-related remuneration • Other aspects (e.g. AIs’ infrastructure such as IT, skills & resources and organisational structure)

  31. Data Quality [Diagram: elements of data quality management - management oversight & control; accuracy, completeness & appropriateness; data quality assessment programme & internal audit; IT infrastructure; data architecture; staff competency; data processing; storage, retrieval & deletion; data collection; IRB data; external & pooled data; reconciliation with A/C data; use of statistical techniques]

  32. Quantitative Requirements • Accuracy of PD, LGD, EAD • Discriminatory power and calibration • Benchmarking • Stress testing

  33. Validation of a Rating System: Back-testing • Back-testing is the direct comparison of the risk component estimates with the realised figures, e.g. PD against the default rate of a borrower grade (or pool for retail) • In practice, estimates will never be exactly the same as the realised figures. The question is whether the deviation is acceptable, especially when the estimates are smaller than the realised figures (i.e. underestimation) • In general, statistical hypothesis testing can be applied: Null hypothesis (H0): The estimate of the risk component is correct Alternative hypothesis (H1): The risk component is underestimated • To use the hypothesis testing technique, a confidence level needs to be set and a probability distribution of the risk component needs to be defined.
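
One way to put this into practice (an illustrative choice; the slide does not prescribe a particular test or distribution) is a one-sided binomial test that treats defaults within a grade as independent.

```python
# Minimal back-testing sketch: one-sided binomial test of whether the
# realised default count in a grade is consistent with the estimated PD.
# Independence of defaults is a simplifying assumption made here.
from scipy.stats import binom

def backtest_pd(pd_estimate: float, n_borrowers: int, n_defaults: int,
                confidence: float = 0.95) -> bool:
    """Return True if H0 (the PD estimate is correct) is NOT rejected."""
    # p-value: probability of observing n_defaults or more defaults under H0
    p_value = binom.sf(n_defaults - 1, n_borrowers, pd_estimate)
    return p_value > 1 - confidence

# Hypothetical grade D: PD estimated at 10%, 200 borrowers, 28 defaults observed
print(backtest_pd(pd_estimate=0.10, n_borrowers=200, n_defaults=28))
```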

  34. Validation of a Rating System: Benchmarking • Benchmarking is the comparison of an AI’s risk component estimates with those of a third party, such as estimates by rating agencies • For PD, external benchmarks are generally most useful where back-testing is difficult • For LGD and EAD, as well as the PD of small-sized borrowers (e.g. individuals and SMEs), external benchmarks may not be available • LGD and EAD depend heavily on individual AIs’ recovery and credit monitoring policies, and internal estimates may therefore differ substantially from the benchmarks, even for the same type of facilities

  35. Validation of a Rating System: Stability Analysis • Even if a rating system performs well under certain situations or for certain types of borrowers/facilities, it may not do so in other situations or with other types of borrowers/facilities • Stability analysis examines whether a rating system and/or the risk component estimates remain valid under different situations or for different types of borrowers/facilities. It involves asking questions like: • Would the back-testing results remain satisfactory during an economic boom as well as a recession? • How would the distribution of borrowers/facilities amongst rating grades and the estimates of risk components change if certain assumptions are modified (e.g. discount rates in workout LGD)? • What would the risk component estimates be if only a sub-sample of the data is used in quantification?
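
As an illustration of the first question, a grade's realised default rate can be compared across economic regimes; the years, regime labels and counts below are hypothetical.

```python
# Sketch of one stability check: compare realised default rates for a grade
# across economic regimes. Data and regime labels are hypothetical.
observations = [
    # (year, regime, n_borrowers_in_grade, n_defaults)
    (2001, "recession", 180, 24),
    (2002, "recession", 190, 22),
    (2003, "boom",      210, 15),
    (2004, "boom",      230, 14),
    (2005, "boom",      240, 16),
]

by_regime: dict[str, tuple[int, int]] = {}
for _, regime, n, d in observations:
    total_n, total_d = by_regime.get(regime, (0, 0))
    by_regime[regime] = (total_n + n, total_d + d)

for regime, (n, d) in by_regime.items():
    print(f"{regime:9s} default rate: {d / n:.1%}")
# A large gap between regimes suggests the grade-level PD estimate may not
# be stable across the economic cycle.
```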

  36. Validation of a Rating System: Discriminatory Power • Discriminatory power is about the “rank order” of borrowers. It assesses the ability of a rating system to differentiate “bad” borrowers (i.e. those going to default) from “good” borrowers (i.e. those not going to default). • Many quantitative techniques can be used to assess discriminatory power: • Accuracy Ratio • Receiver Operating Characteristic Measure • Pietra Index • Bayesian Error Rate • Conditional Information Entropy Ratio • Information Value • Brier Score • Divergence

  37. Validation of a Rating System: Discriminatory Power [Chart: frequency distributions of rating scores for “bad” borrowers and “good” borrowers] • Generally speaking, all these techniques measure the difference between the distribution of the “good” borrowers and that of the “bad” borrowers in relation to risk characteristics, e.g. credit scores, rating grades, income

  38. Validation of a Rating System: Discriminatory Power • For a perfect rating system, the distribution of “bad” borrowers would not overlap with that of “good” borrowers • Discriminatory power analysis can be applied to borrower ratings of corporate, bank and sovereign exposures • For retail exposures, discriminatory power can be assessed for individual rating criteria that are used in segmentation • As with back-testing, it is difficult to set a “passing mark” for a rating system’s discriminatory power
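
As an illustration of one of the measures listed in slide 36, the Accuracy Ratio can be derived from the area under the ROC curve as AR = 2 × AUC - 1. The scores and default flags below are hypothetical, and lower scores are assumed to indicate higher risk.

```python
# Minimal sketch: Accuracy Ratio (Gini) from the ROC area, AR = 2*AUC - 1.
# Scores and default flags are illustrative; lower score = higher risk here,
# so the score is negated before computing the AUC.
from sklearn.metrics import roc_auc_score

scores    = [92, 85, 77, 66, 58, 47, 41, 35, 28, 15]   # credit scores
defaulted = [ 0,  0,  0,  0,  0,  1,  0,  1,  1,  1]   # 1 = defaulted

auc = roc_auc_score(defaulted, [-s for s in scores])   # higher risk ranked higher
accuracy_ratio = 2 * auc - 1
print(f"AUC = {auc:.2f}, Accuracy Ratio = {accuracy_ratio:.2f}")
```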

  39. Conclusion • Basel II’s most important innovation is to rely on internal rating systems for regulatory capital purposes • But regulators need some assurance that these systems are fit for the purpose • “Validation” is key to this assurance
