LESSON 4.3: XAI TOOLS AND METHODS


Types of XAI Techniques
There are various approaches to creating explainable AI, each suited to different models and application contexts. Two distinctions are commonly used to categorize them:

  • Model-Specific vs. Model-Agnostic: Model-specific methods apply only to certain types of models (such as decision trees), while model-agnostic methods (like LIME and SHAP) treat the model as a black box and can be applied to any model. 
  • Global vs. Local Explanations: Global explanations provide insight into a model’s overall behavior across all inputs, while local explanations account for individual predictions. 


Key XAI Tools and Methods

  • Local Interpretable Model-agnostic Explanations (LIME) is a widely used XAI tool that offers local explanations by fitting simpler, interpretable models (such as linear models) around individual predictions. By providing explanations on a case-by-case basis, LIME helps users understand why a specific prediction was made, making it particularly useful for complex, “black-box” models.
    Example Use Case: In healthcare, LIME can help doctors understand why an AI model flagged a patient as high-risk for a certain disease by highlighting the features most relevant to that prediction. (A minimal code sketch follows this list.)

  • SHapley Additive exPlanations (SHAP) values, derived from cooperative game theory, quantify the contribution of each feature to a model’s prediction. SHAP is particularly valued for its consistency and theoretical foundation, offering a unified measure of feature importance for both local and global explanations.
    Example Use Case: In finance, SHAP values can explain why a customer was denied a loan by revealing which factors (e.g., income, credit score) contributed most to the model’s decision. (A minimal code sketch follows this list.)

  • What-If Tool, developed by Google, lets users interact with ML models and see how changes to input features affect predictions. These “what-if” scenarios can reveal potential biases or vulnerabilities in a model’s decisions. (A conceptual sketch of this kind of probing follows this list.)
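

LIME in Practice
The sketch below shows how LIME might be applied to tabular data. It is a minimal example, assuming the open-source lime and scikit-learn Python packages are installed; the breast-cancer dataset is used only as a convenient stand-in for the healthcare scenario above, and the features printed will depend on the trained model.

    # Train a "black-box" model; LIME only needs its predict_proba function.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_breast_cancer()
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(data.data, data.target)

    # Build an explainer from the training-data distribution.
    explainer = LimeTabularExplainer(
        data.data,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        mode="classification",
    )

    # Explain a single prediction by fitting a local linear surrogate model.
    explanation = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=5
    )
    for feature, weight in explanation.as_list():
        print(f"{feature}: {weight:+.3f}")  # features pushing this prediction

The printed weights show which features pushed this one case toward or away from the flagged class: exactly the case-by-case view LIME is designed to give.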
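
SHAP in Practice
The sketch below is a minimal example, assuming the open-source shap and scikit-learn packages; the loan-style feature names and the data are synthetic placeholders echoing the finance use case above, not a real credit model.

    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    feature_names = ["income", "credit_score", "debt_ratio"]  # illustrative only
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] + 2 * X[:, 1] - X[:, 2] > 0).astype(int)  # synthetic label rule

    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # TreeExplainer computes Shapley values efficiently for tree ensembles;
    # for this binary model the values are in log-odds units.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

    # Local explanation: each feature's contribution to one applicant's score.
    print(dict(zip(feature_names, shap_values[0])))

    # Global explanation: mean |SHAP| per feature as an importance ranking.
    print(dict(zip(feature_names, np.abs(shap_values).mean(axis=0))))

The same SHAP values thus serve both purposes described above: individually, a local explanation of a single decision; aggregated, a global ranking of feature importance.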
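
What-If Probing
The What-If Tool itself is an interactive, browser-based interface (typically launched inside a notebook) rather than a library call, so the sketch below does not use its API. Instead, it illustrates the underlying idea with plain scikit-learn: perturb one feature of a single input and watch the prediction respond. All names and values are illustrative.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] + 2 * X[:, 1] - X[:, 2] > 0).astype(int)  # synthetic labels
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # Sweep one feature of a single instance, holding the others fixed.
    instance = X[0].copy()
    for value in np.linspace(-2.0, 2.0, 5):
        probe = instance.copy()
        probe[1] = value  # "what if feature 1 took this value instead?"
        prob = model.predict_proba(probe.reshape(1, -1))[0, 1]
        print(f"feature_1 = {value:+.1f} -> P(class 1) = {prob:.2f}")

If small, plausible changes to a sensitive feature flip the prediction, that is the kind of bias or vulnerability the What-If Tool is designed to surface visually.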