
BLACK BOX IN AI AND DATA- A CLOSER LOOK INTO EXPLAINABLE AI

Want to know what the black box in AI and data means? Gear up for an in-depth look at explainable AI and what it offers the data science industry. bit.ly/3D0U8CL



1. A CLOSER LOOK INTO EXPLAINABLE AI

2. Black box AI systems are AI systems whose internal workings, decision-making workflows, and contributing factors are not visible to, or remain unknown to, human users. This lack of transparency makes it hard to understand or explain how the system's underlying model arrives at its conclusions; the black box offers no proper explanation of what went into producing a result.

Deep learning has driven tremendous progress in automated image analysis. Before that, image analysis was commonly performed by systems fully designed by human domain experts. For example, such a system might consist of a statistical classifier that uses handcrafted properties of an image (features) to perform a certain task.

White box AI, aka explainable AI (XAI) or glass box AI, is strikingly the opposite of black box AI. It is an AI system with transparent inner workings: users understand how the AI takes in data, what its mechanisms and processes are, and how it reaches its conclusions. Let us understand XAI in depth to reveal its capabilities and its implications for machine learning.

UNDERSTANDING EXPLAINABLE AI

Explainable AI is used to describe an AI model, its expected impact, and its potential biases. It helps characterize model accuracy, fairness, transparency, and outcomes in AI-powered decision-making. Explainable AI is crucial for an organization to build trust and confidence when putting AI models into production.

BENEFITS

- Better decision-making by understanding how to influence predicted outcomes. For example, a predictive model can generate likely outcomes regarding customer churn based on your data.
- Faster AI optimization by monitoring and evaluating your models. You gain transparency into which model performs best, what its key drivers are, and how accurate it is.
- Raised trust and reduced bias in your AI systems, because you can check models for fairness and accuracy. XAI explanations show the patterns your model found in your data.
- Increased adoption of AI systems as your organization, customers, and partners gain more understanding of, and trust in, your ML and AutoML systems.
- Regulatory compliance, as the reasoning behind your AI-based decisions can be audited for conformity with the growing slate of laws and regulations.

APPROACHES

There is no single, optimal way to explain the outputs of a machine learning or AI algorithm. Three main approaches to consider are global vs. local, direct vs. post hoc, and data vs. model. Your choice of approach will depend on the requirements of your ML pipeline and on who is consuming the explanations (a data scientist, a regulator, or a business decision-maker).
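To make the white box idea concrete before turning to the approaches, here is a minimal sketch of a directly interpretable model whose coefficients double as an explanation. It assumes scikit-learn; the churn-style feature names are illustrative, not part of the original deck.

```python
# Minimal sketch: a "white box" model whose coefficients double as a
# global explanation. Assumes scikit-learn; feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["tenure", "monthly_spend", "support_tickets", "logins"]

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# On standardized inputs, coefficient magnitude ranks how strongly each
# feature influences the predicted churn probability.
coefs = model.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(feature_names, coefs), key=lambda t: -abs(t[1])):
    print(f"{name:>16}: {w:+.3f}")
```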

3. [Slide diagram: the XAI decision flow from an ML-ready training dataset to either an interpretable model, which produces direct explanations (global or local) alongside each prediction, or a black box model, which offers no built-in explanation and relies on post hoc explanations generated after evaluating a test sample. Image suggestion: Qlik.com]

GLOBAL VS LOCAL REFERS TO THE SCOPE OF THE EXPLANATION

Global XAI models provide a high-level understanding of how your AI model makes predictions. They typically summarize relationships between input features and predictions in a high-level, abstract way.

Local models provide specific, instance-level explanations for individual predictions. They show the exact contribution of each feature to a particular prediction.
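Both scopes can be demonstrated on one model. The sketch below, assuming scikit-learn, uses permutation importance (one common choice, not named on the slide) for the global view and the additive per-feature terms of a linear model for an exact local attribution; the feature names are hypothetical.

```python
# Sketch of the global vs. local distinction on a single linear model.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=3, n_informative=3,
                           n_redundant=0, random_state=1)
names = ["f0", "f1", "f2"]
clf = LogisticRegression().fit(X, y)

# GLOBAL: how much each feature matters on average across the dataset.
perm = permutation_importance(clf, X, y, n_repeats=10, random_state=1)
print("global importance:", dict(zip(names, perm.importances_mean.round(3))))

# LOCAL: the exact contribution of each feature to one prediction.
# For a linear model, each additive term in the decision function
# (coefficient * feature value) is an exact local attribution.
x = X[0]
local = clf.coef_[0] * x
print("local contributions for sample 0:", dict(zip(names, local.round(3))))
```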

4. DIRECT VS POST HOC REFERS TO THE WAY YOUR MODEL IS DESIGNED TO PROVIDE EXPLANATIONS

DIRECT XAI MODELS ("WHITE BOX") are designed to produce interpretable predictions from the outset. You choose your model's architecture, loss functions, and regularization terms with interpretability in mind. For example, decision trees and logistic regression are direct models, because their structure provides a clear and interpretable explanation of their predictions.

POST HOC MODELS ("BLACK BOX") were not originally designed to be interpretable. Still, an explanation can be generated after the fact, often using a separate tool or process rather than anything built into the model itself. For example, a neural network calls for post hoc explanation, because its predictions are difficult to interpret directly.

DATA VS MODEL REFERS TO THE TYPE OF EXPLANATION BEING PROVIDED

Data XAI models provide an explanation based on the ML-ready input data and the relationships between the features. This type of explanation focuses on the relationship between the input features and the predictions, and on how changes in the features lead to changes in the predictions.

Model explainable AI models provide an explanation based on the internal workings of your model. They focus on how the model processes the input data and how its internal representations are used to make predictions.
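One common post hoc technique, offered here as an illustrative assumption rather than something the slide names, is a global surrogate: fit a shallow interpretable model to reproduce the black box's predictions. A minimal scikit-learn sketch:

```python
# Sketch of a post hoc explanation: a black-box random forest is
# approximated after the fact by a shallow "surrogate" decision tree
# trained on the forest's own predictions. LIME and SHAP are other
# post hoc options; this surrogate approach is just one example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=600, n_features=5, random_state=2)

black_box = RandomForestClassifier(n_estimators=200, random_state=2).fit(X, y)

# Train the interpretable surrogate to mimic the black box, not the labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=2)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how faithfully the simple tree reproduces the black box.
print("surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
```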

5. CHALLENGES

Having your XAI provide explanations that are both accurate and easy to understand involves many challenges. XAI models can be:

- Complex and difficult to understand, even for data scientists and machine learning experts.
- Hard to verify for correctness and completeness. While first-order insights may be relatively simple, the audit trail becomes harder to follow as your AI engine interpolates and extrapolates your data.
- Computationally intensive, which can make them hard to scale to large AI datasets and real-world applications.
- Unable to provide explanations that generalize well across different situations and contexts.
- Subject to a trade-off between explainability and accuracy, as your XAI models may sacrifice some level of accuracy to increase transparency and explainability (see the sketch after this list).
- Difficult to integrate with your existing AI systems, requiring significant changes to existing processes and workflows.
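A minimal illustration of that accuracy trade-off, assuming scikit-learn and synthetic data: cross-validate a depth-limited (auditable) tree against an opaque boosted ensemble. The size of the gap depends entirely on the data, so treat this as a sketch, not a general result.

```python
# Sketch of the explainability/accuracy trade-off: a shallow tree a
# human can audit vs. a black-box ensemble that usually scores higher.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           random_state=3)

simple = DecisionTreeClassifier(max_depth=3, random_state=3)   # auditable
opaque = GradientBoostingClassifier(random_state=3)            # black box

print("interpretable tree:", cross_val_score(simple, X, y, cv=5).mean().round(3))
print("black-box ensemble:", cross_val_score(opaque, X, y, cv=5).mean().round(3))
```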

6. BEST PRACTICES

- Set up a cross-functional AI governance committee that includes not only technical experts but also business, legal, and risk leaders. This committee will guide your AI development teams by defining the organizational framework for XAI, determining the right tools for your needs, and setting standards for different use cases and their associated risk levels.
- Invest in the appropriate talent and tools to implement XAI in your organization, and stay up to date with this rapidly evolving space. Your choice of custom, off-the-shelf, or open-source tools will depend on your short- and long-term needs.
- Clearly define your use case or problem and the decision-making context in which your XAI will be used. This helps ensure that you understand the unique set of risks and legal requirements for each model.
- Consider the audience for your XAI system and what level of explanation they will need in order to understand it.
- Choose XAI techniques appropriate to the problem and use case you have defined, such as feature importance, model-agnostic methods, or model-specific methods.
- Evaluate your XAI models using metrics such as accuracy, transparency, and consistency to ensure they provide accurate and trustworthy explanations. This may require weighing trade-offs between explainability and accuracy.
- Test your XAI models for bias to ensure that they are fair and non-discriminatory (a minimal check is sketched after this list).
- Continuously monitor and update your XAI models as needed to maintain their accuracy, transparency, and fairness.
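As a starting point for the bias test above, here is a minimal sketch, assuming scikit-learn, that compares positive-prediction rates across two groups (demographic parity). The group attribute is synthetic and hypothetical; real audits use actual protected attributes and richer fairness metrics.

```python
# Sketch of a basic bias check: compare the model's positive-prediction
# rate across two groups. A large gap flags the model for review.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=800, n_features=6, random_state=4)
group = np.random.default_rng(4).integers(0, 2, size=len(y))  # synthetic 0/1

clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)

rate_a = pred[group == 0].mean()
rate_b = pred[group == 1].mean()
print(f"positive rate, group A: {rate_a:.3f}")
print(f"positive rate, group B: {rate_b:.3f}")
print(f"demographic parity gap: {abs(rate_a - rate_b):.3f}")
```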

7. Lastly, you should ensure that your XAI models adhere to the four principles of explainable artificial intelligence defined by the National Institute of Standards and Technology (NIST):

1. EXPLANATION. The system provides accompanying evidence or reasons for all of its outputs.
2. UNDERSTANDABLE. The system's explanations are tailored to be comprehended by each user.
3. ACCURACY. The explanation accurately reflects the process that generated the output.
4. KNOWLEDGE LIMITATIONS. The system operates only within its designed parameters, or when it reaches a sufficient level of confidence in its output.

TECHNIQUES

The specific XAI techniques you employ depend on your problem, the type of AI model you use, and the audience for the explanation. Below are the main XAI techniques used to produce explanations that are both accurate and easy to understand.

FEATURE IMPORTANCE. This technique highlights the most important input features that contribute to a particular AI decision.

MODEL-AGNOSTIC METHODS. These techniques provide explanations that are not specific to any one AI model and can be applied to any black box model. Examples include saliency maps and LIME (Local Interpretable Model-agnostic Explanations).

MODEL-SPECIFIC METHODS. These techniques provide explanations that are specific to a particular AI model, such as decision trees and rule-based models.

COUNTERFACTUAL EXPLANATIONS. This technique explains an AI decision by showing what would have to change in the input data for a different decision to be made (a naive version is sketched below).

VISUALIZATION. Data visualization tools such as graphs, heatmaps, and interactive interfaces can be used to provide clear and intuitive explanations for AI decisions.

Building AI trust in data is critical as the technology becomes more impactful across domains. Trust brings together explainability, governance, information security, and human centricity. These enable artificial intelligence and its human users in data science to interact in harmony, making work smooth and delivering tangible value.
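The counterfactual idea translates directly into code. Below is a deliberately naive sketch, assuming scikit-learn: for one sample, nudge each feature over a grid until the model's decision flips, and report the smallest change found. Production counterfactual tools use proper optimization rather than this brute-force scan.

```python
# Naive counterfactual search: find the smallest single-feature change
# that flips the model's decision for one sample.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=5)
clf = LogisticRegression().fit(X, y)

x = X[0].copy()
original = clf.predict([x])[0]

best = None  # (feature index, delta) of the smallest flip found so far
for i in range(x.shape[0]):
    for delta in np.linspace(-3, 3, 121):
        x_cf = x.copy()
        x_cf[i] += delta
        if clf.predict([x_cf])[0] != original and (
                best is None or abs(delta) < abs(best[1])):
            best = (i, delta)

if best:
    print(f"decision flips if feature {best[0]} changes by {best[1]:+.2f}")
```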

8. GET STARTED ON YOUR PROFESSIONAL DATA SCIENCE JOURNEY

© Copyright 2024. United States Data Science Institute. All Rights Reserved
