
Explainable AI

Explainable AI is about building machines we can trust. XAI gives us a better understanding of how an AI system reasons and arrives at a decision.

Presentation Transcript


  1. EXPLAINABLE AI

  2. Third Wave of AI. Symbolic AI: logic rules represent knowledge, but there is no learning capability and poor handling of uncertainty. Statistical AI: statistical models for specific domains, trained on big data, but with no contextual capability and minimal explainability. Explainable AI: systems construct explanatory models, and they learn and reason about new tasks and situations. Factors driving the rapid advancement of AI: GPUs and on-chip neural networks, new data availability, cloud infrastructure, and algorithms.

  3. What is XAI? An AI system that explains its decision making is referred to as Explainable AI, or XAI. The goal of XAI is to provide verifiable explanations of how machine learning systems make decisions and to keep humans in the loop. There are two ways to provide explainable AI: use machine learning approaches that are inherently explainable, such as decision trees, knowledge graphs and similarity models, or develop new approaches to explain complicated neural networks.
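
A minimal sketch of the first route, an inherently explainable model, using scikit-learn's decision tree; the iris dataset and the depth limit are illustrative choices, not from the slides:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# A shallow tree stays small enough for a human to read end to end.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# The fitted tree can be printed as human-readable if/then rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```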

  4. What is Explainable AI? Today's black-box AI creates confusion: a data product feeds data into a black-box model and returns a decision or recommendation, and the user is left asking "Why did you do that? Why did you not do that? When do you succeed or fail? How do I correct an error?" An explainable AI product instead returns clear and transparent predictions together with an explanation and a feedback channel, so the user can say "I understand why, I understand why not, I know when you succeed or fail, and because I understand, I trust you."

  5. Black-box AI Creates Confusion and Doubt. When a black-box AI produces a poor decision, every stakeholder is left with unanswered questions. Business owner: Can I trust our AI decisions? Why am I getting this decision? How do I answer this customer complaint? IT and operations: How do I monitor and debug this model? Data scientists: Is this the best model that can be built? How can I get a better decision? Internal audit and regulators: Are these AI system decisions fair?

  6. Why Do We Need It? Artificial intelligence is increasingly implemented in our everyday lives to assist humans in making decisions. These decisions range from trivial lifestyle choices to more consequential ones such as loan approvals, investments, court decisions and the selection of job candidates. Many AI algorithms are black boxes that are not transparent, which leads to concerns about trust. In order to trust these systems, humans want accountability and explanation.

  7. Why Do We Need It? The machine learning systems deployed in 2008 were mostly embedded in the products of tech-first companies (e.g. Google, YouTube), where a false prediction merely results in a wrong recommendation to the application user. But when such systems are deployed in other industries such as the military, healthcare and finance, a false prediction can lead to adverse consequences affecting many lives. Thus, we create AI systems that explain their decision making.

  8. AI System. We are entering a new age of AI applications in which machine learning is the core technology, yet machine learning models are opaque, non-intuitive, and difficult for people to understand. This affects DoD and non-DoD applications in transportation, security, medicine, finance, legal and military domains, whose users ask: Why did you do that? Why not something else? When do you succeed? When do you fail? When can I trust you? How do I correct an error?

  9. Process of XAI. The significant enabler of explainable AI is interpretability, which supports collaboration between humans and artificial intelligence. Interpretability is the degree to which a human can understand the cause of a decision. It strengthens trust and transparency, explains decisions, fulfils regulatory requirements, and improves models. The stages of AI explainability are categorised into pre-modelling, explainable modelling and post-modelling, which focus on explainability at the dataset stage, during model development, and after the model has been built, respectively.

  10. Explainability By Design For AI Products. Explainability can be built into every stage of the model lifecycle. Train: model debugging, model visualization, feedback loop. Debug: model diagnostics, root cause analytics. QA: model evaluation, compliance testing. Deploy: model launch sign-off, model release management. Monitor: performance monitoring, fairness monitoring. A/B test: model comparison, cohort analysis. Predict: explainable decisions, API support.

  11. Explainability Approaches. The popular Local Interpretable Model-agnostic Explanations (LIME) approach provides an explanation for an individual prediction of a model in terms of input features, an explanation family, and so on. The post-hoc explainability approach produces individual prediction explanations (using input features, influential concepts and local decision rules) and global prediction explanations (using partial dependence plots, global feature importance and global decision rules). The build-an-interpretable-model approach instead uses inherently interpretable models such as logistic regressions, decision trees and generalised additive models (GAMs).
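
A minimal LIME sketch, assuming the lime and scikit-learn packages are installed; the random-forest model and the iris dataset are illustrative stand-ins, not part of the original slides:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME needs the training data to learn the scale of perturbations.
explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single prediction (a local target) in terms of input features.
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(explanation.as_list())   # (feature condition, importance weight) pairs
```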

  12. Why Explainability: Improve ML Model. Standard ML: data feeds an ML model that produces predictions, evaluated only by generalization error. Interpretable ML: data feeds an ML model whose interpretability enables human inspection, which drives model and data improvement and yields verified predictions, evaluated by generalization error plus human experience.
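
A minimal sketch of this inspect-and-improve loop, assuming scikit-learn; the injected label-leaking column is a hypothetical data flaw used only for illustration:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Data problem: a column that simply copies the training label leaks into the data.
X_train_leaky = np.column_stack([X_train, y_train])
X_test_leaky = np.column_stack([X_test, np.zeros(len(y_test))])  # leak unavailable at test time

leaky_model = DecisionTreeClassifier(random_state=0).fit(X_train_leaky, y_train)

# Human inspection: the importances show the model relies on the last (leaked) column,
# flagging the data issue before deployment.
print("importance of leaked column:", leaky_model.feature_importances_[-1])
print("leaky test accuracy:", leaky_model.score(X_test_leaky, y_test))

# Model/data improvement: drop the leaked column and retrain on clean data.
fixed_model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("fixed test accuracy:", fixed_model.score(X_test, y_test))
```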

  13. Explanation Targets. The target specifies the object of an explainability method, and it varies in type, scope, and complexity. The type of explanation target is often determined by the role-specific goals of the end users. There are two types of targets, inside vs. outside, which can also be referred to as mechanistic vs. functional. AI experts require a mechanistic explanation of some component inside a model, for example to understand how layers of a deep network respond to input data in order to debug or validate the model.

  14. Explanation Targets. In contrast, non-experts often require a functional explanation to understand how some output outside a model is produced. In addition, targets can vary in their scope. Outside-type targets are typically some form of model prediction and can be explained either locally or globally. Inside-type targets vary depending on the architecture of the underlying model; they can be a single neuron or entire layers in a neural network.
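
A minimal sketch of inspecting an inside-type target, the activations of a hidden layer, assuming PyTorch; the tiny network is a made-up example, not from the slides:

```python
import torch
import torch.nn as nn

# A small network whose hidden layer we want to inspect.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

activations = {}

def save_activation(name):
    # Returns a hook that stores the layer's output under the given name.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a forward hook on the hidden ReLU layer (the inside-type target).
model[1].register_forward_hook(save_activation("hidden_relu"))

x = torch.randn(1, 4)               # a single input example
logits = model(x)                   # the forward pass triggers the hook
print(activations["hidden_relu"])   # hidden-layer activations for this input
```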

  15. Explanation Drivers. All factors that have an impact on the development of an AI model can be termed explanation drivers. The most common type of drivers are the input features to an AI model. Explaining an image classifier's predictions in terms of individual input pixels can result in explanations that are too noisy, too expensive to compute, and, more importantly, difficult to interpret. Alternatively, we can rely on a more interpretable representation of the input features, known as super-pixels in the case of image classifier predictions.
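
A minimal sketch of grouping pixels into super-pixels with SLIC segmentation, assuming scikit-image is installed; the sample image and parameter values are illustrative only:

```python
from skimage.data import astronaut
from skimage.segmentation import slic

image = astronaut()   # example RGB image bundled with scikit-image

# Over-segment the image into roughly 50 compact regions (super-pixels).
segments = slic(image, n_segments=50, compactness=10, start_label=1)

# Each pixel now carries a super-pixel label; an explainer can score whole
# super-pixels instead of millions of individual pixels.
print(segments.shape, segments.max())
```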

  16. Example (LIME): a model predicts flu from patient features such as sneeze, weight, headache, no fatigue and age. The explainer (LIME) takes the data and the prediction and highlights which features (sneeze, headache) supported the prediction and which (no fatigue) counted against it; the human then makes the final decision based on the explanation.

  17. Explanation Families. A post-hoc explanation aims at communicating some information about how a target is caused by drivers for a given AI model. An explanation family must be chosen such that its information content is easily interpretable by the user. Importance scores: individual importance scores communicate the relative contribution made by each explanation driver to a given target. Decision rules: if/then rules in which the outcome represents the prediction of an AI model and the condition is a simple function defined over the input features.
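
A minimal sketch of computing importance scores via permutation importance with scikit-learn; the dataset and model are illustrative stand-ins:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Shuffling one feature at a time and measuring the drop in score gives a
# relative contribution (importance score) for each explanation driver.
result = permutation_importance(model, data.data, data.target,
                                n_repeats=5, random_state=0)

# Print the five most influential features.
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```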

  18. Explanation Families. Decision trees: unlike decision rules, they are structured as a graph where internal nodes represent conditional tests on input features and leaf nodes represent model outcomes; in a decision tree, each input example satisfies exactly one path from the root node to a leaf node. Dependency plots: they aim at communicating how a target's value varies as a given explanation driver's value varies, in other words, how a target's value depends on a driver's value.
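
A minimal sketch of the data behind a dependency (partial dependence) plot, using scikit-learn; the diabetes dataset and the choice of the BMI feature are illustrative:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

data = load_diabetes()
model = GradientBoostingRegressor(random_state=0).fit(data.data, data.target)

# How does the model's average predicted target change as feature 2 (BMI) varies
# over a grid of values?
pd_result = partial_dependence(model, data.data, features=[2], grid_resolution=20)
print(data.feature_names[2])          # the driver being varied
print(pd_result["average"].shape)     # averaged predictions over the feature grid
```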

  19. Conclusion. Multiple methods have been proposed to explain pre-developed AI models. They vary in terms of their explanation target, explanation drivers, explanation family and extraction mechanism. XAI is an active research area in which new, improved methods are developed continually. Such diversity of choices can make it challenging for practitioners to adopt the most suitable approach for a given application; this challenge is addressed by presenting a snapshot of the most notable post-modelling explainability methods.
