Model Explainability and Interpretability
Anchors
It provides high-precision if-then rules that "anchor" a prediction: as long as the rule holds, changes to the other features almost never change the prediction.
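A minimal sketch of the idea behind anchors: estimate a candidate rule's precision as the fraction of covered samples whose prediction stays the same. The model, rule, and samples below are illustrative toys, not the algorithm's actual search procedure.

```python
# Anchor-style precision check (toy sketch): a rule is a good anchor if,
# whenever it covers a sample, the model's prediction is unchanged.
def rule_precision(model, samples, rule, prediction):
    covered = [s for s in samples if rule(s)]
    if not covered:
        return 0.0
    hits = sum(1 for s in covered if model(s) == prediction)
    return hits / len(covered)

model = lambda x: x[0] > 2.0                  # toy classifier
rule = lambda x: x[0] > 3.0                   # candidate anchor: "x0 > 3"
samples = [[1.0], [2.5], [3.5], [4.0], [5.0]]
prec = rule_precision(model, samples, rule, prediction=True)
# precision 1.0: whenever the rule holds, the prediction stays True
```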
Partial Dependence Plots (PDPs)
They show the marginal effect of a feature on the predicted outcome of a model.
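The marginal effect can be computed by hand: for each grid value, force the feature of interest to that value for every instance and average the predictions. The model and data below are illustrative assumptions.

```python
# Hand-rolled partial dependence sketch (assumes a callable `model`
# and a small dataset in `X`; all names are illustrative).
def partial_dependence(model, X, feature_idx, grid):
    """Average prediction over the data with one feature forced to each grid value."""
    pd_values = []
    for g in grid:
        preds = []
        for row in X:
            modified = list(row)
            modified[feature_idx] = g  # force the feature of interest
            preds.append(model(modified))
        pd_values.append(sum(preds) / len(preds))
    return pd_values

# Toy black-box: prediction depends linearly on feature 0.
model = lambda x: 2.0 * x[0] + 0.5 * x[1]
X = [[1.0, 4.0], [2.0, 6.0], [3.0, 8.0]]
pdp = partial_dependence(model, X, feature_idx=0, grid=[0.0, 1.0, 2.0])
# the curve rises by 2.0 per grid step, exposing feature 0's marginal effect
```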
Global Surrogate Models
A globally interpretable model is trained to approximate the predictions of the black-box model.
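A minimal version of this idea: fit a simple linear model to the black-box's *predictions* rather than the true labels. The black-box and training points below are stand-ins for illustration.

```python
# Global surrogate sketch: approximate an opaque model with an
# interpretable one by training on the opaque model's outputs.
def fit_linear_surrogate(blackbox, xs):
    """Ordinary least squares y = a*x + b against black-box outputs."""
    ys = [blackbox(x) for x in xs]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

blackbox = lambda x: 4.0 * x + 1.0   # stand-in for an opaque model
a, b = fit_linear_surrogate(blackbox, [0.0, 1.0, 2.0, 3.0])
# for a linear black-box the surrogate recovers it: a = 4, b = 1
```

How faithfully the surrogate tracks the black-box (e.g. its R² on held-out predictions) determines how much its coefficients can be trusted as explanations.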
Feature Importance
This technique ranks features by how much each one contributes to the model's predictive performance.
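One common way to measure this is permutation importance: break the link between a feature and the target by shuffling that column, and see how much the error grows. The sketch below uses a deterministic "shuffle" (a reversal) so it is reproducible; real code would use random permutations. Model and data are illustrative.

```python
# Permutation-importance sketch: error increase after shuffling a column.
def permutation_importance(model, X, y, feature_idx):
    def mse(rows):
        return sum((model(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)
    baseline = mse(X)
    col = [r[feature_idx] for r in X][::-1]   # deterministic "shuffle"
    X_perm = [r[:feature_idx] + [c] + r[feature_idx + 1:]
              for r, c in zip(X, col)]
    return mse(X_perm) - baseline             # error increase = importance

model = lambda x: 3.0 * x[0]          # toy model that only uses feature 0
X = [[1.0, 9.0], [2.0, 8.0], [3.0, 7.0]]
y = [3.0, 6.0, 9.0]
imp0 = permutation_importance(model, X, y, 0)   # large: feature 0 matters
imp1 = permutation_importance(model, X, y, 1)   # zero: feature 1 is ignored
```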
Decision Tree Surrogates
A decision tree is used to approximate the behavior of the black-box model, making it interpretable.
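In the simplest case the surrogate is a depth-1 tree (a "stump"): one threshold on one feature, chosen to best mimic the black-box's predictions. The step-like black-box below is an illustrative assumption.

```python
# Depth-1 tree surrogate sketch: find the split that best reproduces
# the black-box's outputs, yielding a human-readable rule.
def fit_stump_surrogate(blackbox, xs):
    ys = [blackbox(x) for x in xs]
    best = None
    for t in sorted(xs)[1:]:                       # candidate split points
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        lm = sum(left) / len(left)
        rm = sum(right) / len(right)
        sse = (sum((y - lm) ** 2 for y in left)
               + sum((y - rm) ** 2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x < t else rm           # rule: "x < t ? lm : rm"

blackbox = lambda x: 0.0 if x < 5.0 else 10.0     # opaque step-like model
stump = fit_stump_surrogate(blackbox, [1.0, 2.0, 6.0, 7.0])
# the stump's single rule mirrors the black-box's behavior
```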
LIME (Local Interpretable Model-agnostic Explanations)
It provides local explanations by approximating the model locally with an interpretable one.
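The core move can be sketched by hand (this is not the `lime` package): perturb the instance, weight the perturbed samples by proximity, and fit a weighted linear model whose slope serves as the local explanation. The quadratic black-box and perturbation scheme are illustrative.

```python
# LIME-style sketch in 1-D: proximity-weighted least squares around x0.
def local_linear_explanation(model, x0, deltas=(-0.1, 0.0, 0.1)):
    xs = [x0 + d for d in deltas]
    ys = [model(x) for x in xs]
    ws = [1.0 / (1.0 + abs(x - x0)) for x in xs]   # closer = heavier weight
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    num = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    den = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    return num / den                               # local slope near x0

model = lambda x: x * x                 # nonlinear black-box
slope = local_linear_explanation(model, x0=3.0)
# locally, x^2 behaves like a line with slope close to 2*x0 = 6
```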
SHAP (SHapley Additive exPlanations)
It uses cooperative game theory (Shapley values) to fairly attribute the change in the model's output, relative to a baseline, among the input features.
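For a handful of features, exact Shapley values can be computed by enumerating feature subsets; the `shap` library approximates this at scale. Replacing "missing" features with a baseline value, as below, is one common convention; the additive toy model is illustrative.

```python
from itertools import combinations
from math import factorial

# Exact Shapley-value sketch (tractable only for few features).
def shapley_values(model, x, baseline):
    n = len(x)
    def value(subset):
        # Features outside the subset are replaced by the baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(z)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (value(set(S) | {i}) - value(set(S)))
        phis.append(phi)
    return phis

model = lambda z: 2.0 * z[0] + 3.0 * z[1]       # additive toy model
phi = shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0])
# contributions sum to model(x) - model(baseline) = 5.0
```

The efficiency property checked in the last comment (attributions sum to the total output change) is what makes the attribution "fair" in the game-theoretic sense.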
Counterfactual Explanations
They describe the smallest change to the input features that would change the prediction outcome.
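A brute-force version of the search: scan increasing perturbations of one feature in both directions and return the first (hence smallest) change that flips the decision. The threshold classifier and step size are illustrative assumptions; real methods optimize over all features with distance penalties.

```python
# Counterfactual sketch: smallest +/- change to one feature that flips
# the classifier's decision for this instance.
def counterfactual_delta(classify, x, feature_idx, step=0.1, max_steps=100):
    original = classify(x)
    for k in range(1, max_steps + 1):
        for sign in (+1.0, -1.0):
            candidate = list(x)
            candidate[feature_idx] += sign * step * k
            if classify(candidate) != original:
                return sign * step * k          # first flip found is smallest
    return None

# Toy classifier: approve when x0 + x1 exceeds 5
classify = lambda x: x[0] + x[1] > 5.0
delta = counterfactual_delta(classify, [2.0, 2.5], feature_idx=0)
# raising feature 0 by about 0.6 crosses the decision boundary
```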
Individual Conditional Expectation (ICE) Plots
They plot the relationship between the feature and the prediction for individual instances.
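ICE curves are computed like a PDP but without the final averaging: one curve per instance (averaging the ICE curves recovers the PDP). The interaction model below is illustrative, chosen so the per-instance curves disagree.

```python
# ICE sketch: per-instance prediction curves over a feature grid.
def ice_curves(model, X, feature_idx, grid):
    curves = []
    for row in X:
        curve = []
        for g in grid:
            modified = list(row)
            modified[feature_idx] = g
            curve.append(model(modified))
        curves.append(curve)
    return curves

# Interaction: feature 0's effect depends on feature 1's sign.
model = lambda x: x[0] * x[1]
X = [[0.0, 1.0], [0.0, -1.0]]
curves = ice_curves(model, X, feature_idx=0, grid=[0.0, 1.0, 2.0])
# the two curves slope in opposite directions — a heterogeneity
# the averaged PDP (a flat line here) would hide
```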
Local Feature Importance
Identifies the contribution of each feature to the prediction of a single instance.
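One simple occlusion-style version: replace each feature of the instance with a baseline value and record how much the prediction moves. The additive toy model and zero baseline are illustrative assumptions; LIME and SHAP are more principled ways to produce such per-instance scores.

```python
# Occlusion-style local importance sketch for a single instance.
def local_importance(model, x, baseline):
    base_pred = model(x)
    scores = []
    for i in range(len(x)):
        z = list(x)
        z[i] = baseline[i]                 # "occlude" feature i
        scores.append(base_pred - model(z))
    return scores

model = lambda z: 5.0 * z[0] + 1.0 * z[1]
scores = local_importance(model, x=[1.0, 1.0], baseline=[0.0, 0.0])
# feature 0 contributes 5.0 and feature 1 contributes 1.0
# to this particular prediction
```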
© Hypatia.Tech. 2024 All rights reserved.