A Unified Approach
to Interpreting
Model Predictions
Rama Irsheidat
Scott M. Lundberg et al.
TABLE OF CONTENTS
01 Introduction
02 Additive Feature Attribution Methods
03 SHAP
04 Simple Properties Uniquely Determine Additive Feature Attributions
05 Computational and User Study Experiments
06 Conclusion
Why do we care so much about explainability in ML?
Introduction: Example (figure-only slides)
Introduction
Understanding why a model makes a certain prediction can be as crucial as
the prediction’s accuracy in many applications
Introduction
This problem exists because, on large datasets, complex models tend to be very accurate but hard to interpret.
Introduction
Focus on explaining individual predictions one at a time.
Introduction
We replace each term in the summation you would get from a linear model with a value that represents the importance of that feature in the complicated model.
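Concretely, all of the methods unified here share the additive explanation-model form below, where z′ ∈ {0, 1}^M indicates which simplified input features are present and φi is the attribution given to feature i:

\[
g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z_i'
\]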
Additive Feature Attribution Methods

• Shapley regression values: feature importances for linear models in the presence of multicollinearity. The method requires retraining the model on all feature subsets and assigns each feature an importance value that represents the effect on the model prediction of including that feature.
• Shapley sampling values: explain any model by applying sampling approximations to the Shapley regression value equation and by approximating the effect of removing a variable from the model by integrating over samples from the training dataset.
• Quantitative Input Influence: proposes a sampling approximation to Shapley values that is nearly identical to Shapley sampling values.
• LIME: interprets individual model predictions by locally approximating the model around a given prediction.
• DeepLIFT: a recursive prediction explanation method for deep learning.
• Layer-Wise Relevance Propagation: interprets the predictions of deep networks.
The Shapley-value-based methods (Shapley regression values, Shapley sampling values, and Quantitative Input Influence) have better theoretical grounding but slower computation.
LIME, DeepLIFT, and Layer-Wise Relevance Propagation have faster estimation but fewer guarantees.
SHAP
How should we define the importance of each feature, φi(f, x)?
Base rate of loan rejection: how often do people get their loans denied on average?
Why am I at 55 percent?
We have to explain this 35 percent difference. So how can we do this?
We take the expected value of the output of our model (the base rate), then introduce one feature at a time into that conditional expectation:
• The fact that John is 20 jumps his risk up by 15 percent.
• His very risky profession jumps the risk up to 70 percent.
• He made a ton of money in the stock market last year, so his capital gains push him down to 55 percent.
We have essentially divided up how we got from the base rate to the final prediction by conditioning on the features one at a time until we have conditioned on all of them.
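In symbols, for one particular ordering that introduces features 1, 2, ..., M in turn, this is just a telescoping sum (a sketch of the idea, not the paper's exact notation):

\[
f(x) - E[f(X)] = \sum_{i=1}^{M} \Big( E\big[f(X) \mid x_1, \ldots, x_i\big] - E\big[f(X) \mid x_1, \ldots, x_{i-1}\big] \Big)
\]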
Example (figure slides)
We can't just pick one particular order and think we've solved it; the attributions depend on the order, so any single ordering is meaningless. What do we do here?
Simple Properties Uniquely Determine Additive Feature Attributions

Shapley Properties
• Local accuracy: the output of the explanation model matches the output of the original model for the prediction being explained.
• Missingness: features missing in the original input have no impact (zero attribution).
• Consistency: if you change the original model so that a feature has a larger impact in every possible ordering, then that feature's attribution (importance) should not decrease.
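A sketch of the three properties in symbols, following the paper's notation, where x′ is the simplified input, f_x(z′) = f(h_x(z′)) evaluates the model on a simplified input, and z′ \ i denotes z′ with z′_i set to zero:

\[
\text{Local accuracy: } f(x) = g(x') = \phi_0 + \sum_{i=1}^{M} \phi_i x_i'
\qquad
\text{Missingness: } x_i' = 0 \Rightarrow \phi_i = 0
\]
\[
\text{Consistency: } f'_x(z') - f'_x(z' \setminus i) \ge f_x(z') - f_x(z' \setminus i) \ \text{for all } z' \ \Longrightarrow\ \phi_i(f', x) \ge \phi_i(f, x)
\]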
SHAP values arise from averaging each feature's contribution across all possible orderings.
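Equivalently, averaging over orderings gives the classic Shapley value formula over feature subsets S ⊆ F (F is the set of all features, and f_S is the model evaluated with only the features in S):

\[
\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F| - |S| - 1)!}{|F|!} \Big[ f_{S \cup \{i\}}\big(x_{S \cup \{i\}}\big) - f_S\big(x_S\big) \Big]
\]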
Very painful to compute.
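A minimal brute-force sketch in Python of why it is painful: every ordering of the features is enumerated, and "missing" features are filled in by averaging over background samples (an independence assumption; the function and variable names are just for illustration).

```python
import itertools
import numpy as np

def shapley_values_by_permutations(f, x, background):
    """Brute-force Shapley values: average each feature's marginal contribution
    over every ordering of the features.  'Missing' features are filled in by
    averaging the model output over background samples (an independence
    assumption; other choices are possible)."""
    M = len(x)

    def value(S):
        # Approximate E[f(X) | X_S = x_S]: fix the features in S to x's values
        # and average the model output over the background data for the rest.
        Z = background.copy()
        if S:
            Z[:, S] = x[S]
        return float(np.mean(f(Z)))

    phi = np.zeros(M)
    orderings = list(itertools.permutations(range(M)))
    for order in orderings:
        S = []
        for i in order:
            phi[i] += value(S + [i]) - value(S)
            S.append(i)
    return phi / len(orderings)  # O(M! * M) model calls: intractable beyond a handful of features
```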
Find an approximate solution
1. Model-Agnostic Approximations
1.1 Shapley sampling values
1.2 Kernel SHAP (Linear LIME + Shapley values)
Linear LIME (which uses a linear explanation model) fits a linear model locally to the original model that we're trying to explain.
Shapley values are the only possible solution that satisfies Properties 1-3 – local
accuracy, missingness and consistency.
This means we can now
estimate the Shapley
values using linear
regression.
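A hedged usage sketch with the authors' open-source shap package (the data and model below are toy placeholders, and the exact API can differ across package versions):

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy model to explain (purely illustrative).
X = np.random.rand(200, 4)
y = X[:, 0] + 2 * X[:, 1] + np.random.normal(scale=0.1, size=200)
model = RandomForestRegressor(n_estimators=50).fit(X, y)

background = X[:50]                              # background data stands in for "missing" features
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:5])       # one row of phi values per explained instance

# Local accuracy: the base value plus the attributions reconstructs each prediction.
print(explainer.expected_value + shap_values.sum(axis=1))
print(model.predict(X[:5]))
```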
2. Model-Specific Approximations
2.1 Linear SHAP
For linear models, SHAP values can be approximated directly from the model’s weight
coefficients.
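A minimal sketch of that idea, assuming (approximately) independent input features; the helper name and arguments are illustrative only:

```python
import numpy as np

def linear_shap(w, b, x, X_background):
    """For a linear model f(x) = w @ x + b with (approximately) independent
    features, each SHAP value is the coefficient times the feature's deviation
    from its background mean; the base value is the mean prediction E[f(X)]."""
    mu = X_background.mean(axis=0)  # E[x_j] estimated from background data
    phi = w * (x - mu)              # phi_j = w_j * (x_j - E[x_j])
    phi0 = b + w @ mu               # base value, so phi0 + phi.sum() == w @ x + b
    return phi0, phi
```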
2.2 Low-Order SHAP
2.3 Max SHAP
Calculating the probability that each input will increase the maximum value over every
other input.
2. Model-Specific Approximations
2.4 Deep SHAP (DeepLIFT + Shapley values)
DeepLIFT
DeepLIFT is a recursive prediction explanation method for deep learning that satisfies local accuracy and missingness, and we know that Shapley values are the only attribution values that also satisfy consistency.
Adapting DeepLIFT to become a compositional approximation of SHAP values leads to Deep SHAP.
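A hedged sketch of using Deep SHAP through the shap package's DeepExplainer with a tiny Keras model (the model and random data below are placeholders for an MNIST-style setup; API details vary by version):

```python
import numpy as np
import shap
import tensorflow as tf

# Tiny stand-in for an MNIST digit classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

x = np.random.rand(200, 28, 28).astype("float32")  # placeholder images
background = x[:100]                                # background defines the base rate E[f(X)]

explainer = shap.DeepExplainer(model, background)   # DeepLIFT-style backprop of SHAP values
shap_values = explainer.shap_values(x[100:105])     # per-pixel attributions, one array per class
```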
Computational and User Study
Experiments
1. Computational Efficiency
Comparing Shapley sampling, SHAP, and LIME on both dense and sparse decision tree
models illustrates both the improved sample efficiency of Kernel SHAP and that values
from LIME can differ significantly from SHAP values that satisfy local accuracy and
consistency.
2. Consistency with Human Intuition
(A) Attributions of sickness score (B) Attributions of profit among three men
Participants were asked to assign importance for the output (the sickness score or
money won) among the inputs (i.e., symptoms or players). We found a much stronger
agreement between human explanations and SHAP than with other methods.
3. Explaining Class Differences
Explaining the output of a convolutional network trained on the MNIST digit dataset.
(A) Red areas increase the probability of that class, and blue areas decrease the probability. Masking removes pixels in order to go from an 8 to a 3.
(B) The change in log odds when masking over 20 random images supports the use of
better estimates of SHAP values.
Conclusion
• The growing tension between the accuracy and interpretability of model
predictions has motivated the development of methods that help users
interpret predictions.
• The SHAP framework identifies the class of additive feature importance
methods (which includes six previous methods) and shows there is a
unique solution in this class that adheres to desirable properties.
• We presented several different estimation methods for SHAP values, along
with proofs and experiments showing that these values are desirable.
THANKS!