This slide deck presents a short summary of my talk at ACM IUI 2023. You can download the full paper here: https://arxiv.org/abs/2302.10671.
Paper Title: Directive Explanations for Monitoring the Risk of Diabetes Onset: Introducing Directive Data-Centric Explanations and Combinations to Support What-If Explorations
Abstract: Explainable artificial intelligence is increasingly used in machine learning (ML) based decision-making systems in healthcare. However, little research has compared the utility of different explanation methods in guiding healthcare experts for patient care. Moreover, it is unclear how useful, understandable, actionable and trustworthy these methods are for healthcare experts, as they often require technical ML knowledge. This paper presents an explanation dashboard that predicts the risk of diabetes onset and explains those predictions with data-centric, feature-importance, and example-based explanations. We designed an interactive dashboard to assist healthcare experts, such as nurses and physicians, in monitoring the risk of diabetes onset and recommending measures to minimize risk. We conducted a qualitative study with 11 healthcare experts and a mixed-methods study with 45 healthcare experts and 51 diabetic patients to compare the different explanation methods in our dashboard in terms of understandability, usefulness, actionability, and trust. Results indicate that our participants preferred our representation of data-centric explanations, which provide local explanations with a global overview, over other methods. Therefore, this paper highlights the importance of visually directive data-centric explanation methods for assisting healthcare experts in gaining actionable insights from patient health records. Furthermore, we share our design implications for tailoring the visual representation of different explanation methods for healthcare experts.
4. Explainable Decision Support Systems in Healthcare
ML-based
Decision Support Systems
XAI Methods
Explainable Decision Support Systems
Healthcare experts use an explainable interface for monitoring the risk of diabetes onset for patients and for understanding the rationale behind the predicted risk of diabetes onset.
Use case: monitoring the risk of type 2 diabetes onset.
11. Feature Importance Explanations (Model-Centric Explanations)
• Feature importance is a model-centric explanation method: it estimates which features have the most influence on the model's output or prediction.
• Examples of feature-importance methods are permutation importance, partial dependence plots, LIME-based feature importance, and Shapley-value (SHAP) based feature importance (see the sketch below).
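To make this concrete, here is a minimal illustrative sketch of computing feature importance on tabular patient data. The model, feature names, and synthetic data are my assumptions for illustration, not the dashboard's actual implementation; the SHAP step is shown commented out as an optional dependency.

```python
# Minimal sketch (illustrative, not the dashboard's implementation):
# feature importance for a classifier trained on synthetic "patient" data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical tabular data: rows are patients, columns are risk factors.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "bmi": rng.normal(27, 4, 500),
    "blood_glucose": rng.normal(100, 15, 500),
    "age": rng.integers(30, 80, 500).astype(float),
})
y = (0.05 * X["bmi"] + 0.03 * X["blood_glucose"]
     + rng.normal(0, 1, 500) > 4.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the
# drop in test-set score; larger drops mean more influential features.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, mean in zip(X.columns, result.importances_mean):
    print(f"{name}: {mean:.3f}")

# SHAP-based importance would look roughly like this (optional dependency):
# import shap
# explainer = shap.TreeExplainer(model)
# shap_values = explainer.shap_values(X_test)
```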
12. Data-Centric Explanations
• Data-centric explainability focuses on examining the data used to train the model rather than the model's internal workings. The idea is that by analyzing the training data, we can gain insights into how the model makes its predictions and identify potential biases or errors.
• Examples of data-centric explanation approaches include summarizing datasets with common statistical measures such as mean, mode, and variance; visualizing data distributions to compare a patient's feature values to those across the remaining dataset; and observing changes in model predictions through what-if analysis to probe the sensitivity of the features (see the sketch after this list).
• Additionally, data-centric explanations create awareness about data quality by surfacing data issues, such as data drift, skewed data, outliers, and correlated features, that can impact the overall performance of ML models.
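Continuing the hypothetical model and data from the sketch above (all names carried over from that example), this sketch illustrates the three data-centric ideas just listed: summary statistics, comparing one patient to the population distribution, and a what-if probe.

```python
# Data-centric explanation sketch, reusing model, X_train, X_test from above.

# 1. Dataset summary: mean, std, and quartiles per feature.
print(X_train.describe())

# 2. Distribution comparison: where does this patient's BMI sit relative
#    to the rest of the population?
patient = X_test.iloc[[0]]
bmi_pct = (X_train["bmi"] < patient["bmi"].iloc[0]).mean() * 100
print(f"Patient BMI exceeds that of {bmi_pct:.0f}% of the training population")

# 3. What-if analysis: vary one actionable feature while holding the others
#    fixed, and observe how the predicted risk responds.
for new_bmi in [22, 25, 30, 35]:
    probe = patient.copy()
    probe["bmi"] = float(new_bmi)
    risk = model.predict_proba(probe)[0, 1]
    print(f"BMI={new_bmi}: predicted risk = {risk:.2f}")
```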
13. Counterfactual Explanations (Example-based Explanations)
• Counterfactual explanations are example-based methods that provide the minimal changes required to obtain an alternate decision.
• Rather than explaining the inner workings of the model, counterfactuals can guide users toward their desired predictions (see the sketch below).
* Applied Machine Learning Explainability Techniques, A. Bhattacharya
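As an illustration only, again reusing the hypothetical model and patient from the sketches above, a naive counterfactual search can be written as a scan over one actionable feature; dedicated libraries such as DiCE implement more principled versions of this idea.

```python
# Naive counterfactual sketch: find the smallest change to one actionable
# feature (here BMI) that flips the model's prediction.
import numpy as np

def counterfactual_bmi(model, patient, lo=18.0, hi=45.0, step=0.5):
    """Return the BMI value closest to the patient's current BMI that flips
    the model's prediction, or None if no flip is found in [lo, hi)."""
    original = model.predict(patient)[0]
    current = patient["bmi"].iloc[0]
    flips = []
    for bmi in np.arange(lo, hi, step):
        probe = patient.copy()
        probe["bmi"] = bmi
        if model.predict(probe)[0] != original:
            flips.append(bmi)
    return min(flips, key=lambda b: abs(b - current)) if flips else None

cf = counterfactual_bmi(model, patient)
if cf is not None:
    print(f"Smallest BMI change that flips the prediction: BMI -> {cf:.1f}")
```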
15. Research Questions
RQ1. In what ways do patients and HCPs find our visually directive explanation
dashboard useful for monitoring and evaluating the risk of diabetes onset?
RQ2. In what ways do HCPs and patients perceive data-centric, model-centric, and
example-based visually directive explanations in terms of usefulness, understandability,
and trustworthiness in the context of healthcare?
RQ3. In what ways do visually directive explanations facilitate patients and HCPs to take
action for improving patient conditions?
16. Iterative User-Centric Design and Evaluation Process
Low-fidelity prototype: Figma click-through prototype; qualitative study through 1:1 interviews with 11 healthcare experts; evaluation through thematic analysis.
High-fidelity prototype: interactive web application prototype; mixed-methods study through online questionnaires with 45 healthcare experts and 51 diabetes patients; evaluation through descriptive statistics, tests of proportion, and analysis of participant-reported Likert-scale questions (see the sketch below).
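For the "test of proportion" mentioned above, a two-sample z-test of proportions is one plausible form; this sketch uses statsmodels with invented agreement counts. Only the group sizes (45 healthcare experts and 51 patients) come from the study.

```python
# Illustrative test of proportions with invented counts; group sizes match
# the study (45 healthcare experts, 51 patients), the agreement counts do not.
from statsmodels.stats.proportion import proportions_ztest

agreed = [38, 31]   # hypothetical: participants per group who agreed with a Likert item
totals = [45, 51]   # group sizes from the study
stat, pvalue = proportions_ztest(agreed, totals)
print(f"z = {stat:.2f}, p = {pvalue:.3f}")
```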
18. Combining XAI methods to address different dimensions of explainability
* Applied Machine Learning Explainability Techniques, A. Bhattacharya
19. Tailoring Directive Explanations for Healthcare Experts
o Increasing actionability through interactive what-if analysis
o Explanations through actionable features instead of non-actionable features
o Color-coded visual indicators
o Data-centric directive explanations
* These design implications are aligned with the recommendations from Wang et al. [2019] - Designing Theory-Driven User-Centric Explainable AI
20. Summarizing the contribution of this research
1. Combining XAI methods to address different dimensions of explainability
2. Visually directive data-centric explanations that provide local explanations with a global overview
3. The design of a directive explanation dashboard that combines different explanation methods, which we further compared in terms of understandability, usefulness, actionability, and trustworthiness with healthcare experts and patients
4. Design implications for tailoring visually directive explanations for healthcare experts
24. Thank you for your attention!
Directive Explanations for Monitoring the Risk of Diabetes Onset: Introducing Directive Data-Centric Explanations and Combinations to Support What-If Explorations
Aditya Bhattacharya
aditya.bhattacharya@kuleuven.be
@adib0073
Jeroen Ooge
jeroen.ooge@kuleuven.be
@JeroenOoge
Gregor Stiglic
gregor.stiglic@um.si
@GStiglic
Katrien Verbert
katrien.verbert@kuleuven.be
@katrien_v
Editor's Notes
Explainable artificial intelligence is increasingly used in machine learning (ML) based decision-making systems in healthcare.
Existing XAI methods such as LIME, SHAP, Saliency Maps, and others are predominantly designed for ML experts, and little research has compared the utility of these different explanation methods in guiding healthcare experts, who may not have technical ML knowledge, in patient care.
Additionally, current XAI methods provide explanations through complex, static visualizations that are difficult for healthcare experts to understand.
These gaps highlight the necessity of analyzing and comparing explanation methods with healthcare professionals (HCPs) such as nurses and physicians. (1 min)
Our research particularly focuses on providing an explainable interface for an ML-based system for monitoring the risk of diabetes onset, to be used by healthcare experts such as nurses and physicians.
To understand the real needs of our users in detail, we first conducted an exploratory focus group discussion with 4 nurses.
Method: we first showed them SHAP-based explanations for the model-predicted risk of diabetes onset.
We then conducted a co-design session with our participants to understand the key components of the explainable interface.
Results: we formulated the responses of our participants into user requirements.
Additionally, our users conveyed that the visualizations for SHAP-based explanations are complex and that they need simpler visualizations to communicate with patients. (2 slides, 1.5 mins)
This research presents our Visually Directive Explanation Dashboard, which we developed following an iterative user-centric design process to satisfy our user requirements.
We included model-agnostic local explanation methods to meet our explanation goals, considering our user requirements.
Our dashboard included feature-importance explanations (important risk factors), data-centric explanations (VC1, VC2 and V5), and counterfactual explanations (recommendations to reduce risk).
Another video – separate (30-45 secs)
We further tailored the representation of these explanation methods.
We mainly included interactive explanations that supported what-if explorations instead of static representations: our users can alter a selected feature value and observe the change in the predicted risk.
We also emphasized actionable health variables over non-actionable ones, as users can alter these actionable variables to obtain their favourable outcome.
We also categorized the actionable features into patient measures, which provide information about patient vitals such as blood sugar and BMI, and patient behaviours, which capture behavioural information from FINDRISC questionnaires.
Our customizations also include providing information about the feasibility and impact of counterfactuals presented as recommendations.
We wanted to address the following research questions using our Visually Directive Explanation Dashboard.
In general, we wanted to analyze and compare the understandability, usefulness, actionability, and trust of the different explanation methods included in our dashboard with HCPs, who are our primary users, and patients, who could be our potential users.
We followed an iterative user-centric design process for the design and evaluation of our dashboard.
We first designed a low-fidelity click-through prototype in Figma over multiple iterations; here you can see the final version of the low-fidelity prototype.
We conducted a qualitative user study through 1:1 interviews with 11 healthcare experts.
We evaluated our qualitative interview data using thematic analysis.
We also used the feedback to make design changes for our high-fidelity prototype.
In particular, we improved the discoverability of our interactive visual explanation methods through tooltips and explicit visual indicators.
Overall, the healthcare experts were positive about the utility of the dashboard and further suggested that patients could directly use it as a self-monitoring tool.
So we included patients as participants in the next study.
We then designed and developed our high-fidelity web application prototype.
We conducted a mixed-methods study with 45 healthcare experts and 51 patients through online questionnaires.
We evaluated the gathered data through descriptive statistics, tests of proportion, and analysis of participant-reported Likert-scale questions and their justifications.
We finally addressed our research questions and summarized our findings considering the collective feedback from our two user studies.
We share our design implications for tailoring the visual representation of directive explanations for healthcare experts, based on our observations and results.
The modified design of this visual component (VC3) in our high-fidelity prototype enabled HCPs to perform interactive what-if analysis, i.e. to change feature values and observe the change in the overall prediction. Hence, we recommend interactive design elements that allow what-if analysis for representing directive explanations for HCPs. This recommendation also supports hypothesis generation.
In our approach, we included only actionable variables in the visual components that support what-if interactions and better identification of coherent factors [57]. We anticipated that allowing users to alter values of non-actionable variables could create confusion for HCPs, especially for representing counterfactual explanations.
HCPs indicated that the color-coded representations of risk factors were very useful for getting quick insights. Hence, we recommend color-coded representations and visual indicators to highlight factors that can increase or decrease the predictor variable. This further facilitates the identification of coherent factors.
HCPs indicated that our representation of data-centric explainability through the patient summary was very informative. They could easily identify how good or bad the risk factors are for a specific patient. Additionally, they could get an overview of how other patients are doing compared to a specific patient through the data-distribution charts. Thus, our representation of data-centric explainability provided a local explanation with a global perspective. Furthermore, data-centric directive explanations support forward reasoning by providing access to source and situational data, and yet can be easily integrated with multiple explanation methods.
This paper presents three primary research contributions:
Visually directive data-centric explanations that provide local explanations of the predicted risk for individual patients with a global overview of risk factors for the entire patient population.
The design of a directive explanation dashboard that combines visually represented data-centric, feature-importance, and counterfactual explanations, and a comparison of the different visual explanations in terms of understandability, usefulness, actionability, and trustworthiness with healthcare experts and patients.
Design implications for tailoring explanations for healthcare experts, based on observations from our user-centered design process and an elaborate user study.