Trust in Digital Agents
1. Crystal Sundaramoorthy
Trust in Digitally Mediated Environments
Technological advancements are enabling people to be partners with technology rather than just users of it.
Partnerships require trust.
PSYC 6216 Fall 2018
2.
Agentive Technology of Today
Amazon Alexa (www.amazon.com/echo): "Alexa is always getting smarter—the more you use Echo, the more Alexa adapts to your speech patterns, vocabulary, and personal preferences."
ShotSpotter (www.shotspotter.com): "ShotSpotter is able to fill the gunfire data gap with a network of acoustic sensors and sophisticated software to detect, locate and provide precise information about 90% or more of gunfire incidents in less than 60 seconds."
Nest Thermostat (www.nest.com): "The Nest Learning Thermostat automatically adapts as your life and seasons change. Just use it for a week and it programs itself."
Waymo Self-Driving Car (www.waymo.com): "An independent self-driving technology company with a mission to make it safe and easy for everyone to get around—without the need for anyone in the driver's seat."
3.
Artificial General Intelligence
When will it arrive? Estimated probability of arrival by year:
● 10% probability (optimistic estimate): 2022
● 50% probability (realistic estimate): 2040
● 90% probability (pessimistic estimate): 2075
Noessel (2017)
4.
Risks and Benefits
Using Autonomous Driving Vehicles as an Example
Parasuraman & Riley's taxonomy distinguishes use, misuse, disuse, and abuse of automation. Proper use allows people to accomplish their goals more quickly and efficiently. With improper use, at best people don't reap the full benefits; at worst, human life is at stake.
➔ Americans travel > 3 trillion miles per year
➔ Autonomous shared vehicles cost $0.20 per mile, compared to $1.50 per mile for conventional cars
➔ 80% of seniors fear autonomous vehicles but stand to gain the most
➔ 37K car fatalities per year in the U.S.
Parasuraman & Riley (1997)
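A back-of-the-envelope calculation, using only the figures cited on this slide, shows why the per-mile cost difference matters at national scale. The dollar totals below are purely illustrative, not a forecast:

```python
# Illustrative arithmetic from the slide's figures (not a rigorous estimate).
US_MILES_PER_YEAR = 3e12   # > 3 trillion miles traveled annually in the U.S.
COST_SHARED_AV = 0.20      # $/mile, autonomous shared vehicle
COST_CONVENTIONAL = 1.50   # $/mile, conventional car

savings_per_mile = COST_CONVENTIONAL - COST_SHARED_AV
annual_savings = savings_per_mile * US_MILES_PER_YEAR

print(f"Savings per mile: ${savings_per_mile:.2f}")
print(f"Hypothetical annual savings: ${annual_savings:,.0f}")  # roughly $3.9 trillion
```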
5.
What is Trust?
Two Definitions
"willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that party" - Mayer et al. (1995)
"trust is the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability" - Lee & See (2004)
6.
Neuroscience of Trust
Areas more associated with cognition: anticipating rewards, prediction, and estimating uncertainty
● Paracingulate Cortex
● Caudate Nucleus
● Ventral Tegmental Area (VTA)
Dimoka (2010)
7.
Neuroscience of Distrust
Areas more associated with emotion: dealing with fear and preventing loss
● Insular Cortex
● Amygdala
Dimoka (2010)
9.
Foundations of Trust
Three areas upon which the trustee is evaluated:
● Performance: the capability to perform the work expected
● Integrity: the processes and algorithms used to carry out the required actions
● Purpose: the perceived benevolence of the agent
Lee & See (2004)
11.
PsychSim
Agent-based modeling of social interactions and influence
● Social simulation tool
● Uses a POMDP model to provide multiple agents with a theory of mind
● Initially designed to test teacher approaches to different types of bullies
[Screenshot of PsychSim interface]
Marsella, Pynadath, & Read (2004)
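PsychSim's internals are beyond the scope of this deck, but the core POMDP idea it builds on (maintaining a belief over hidden states, such as a partner's type, and updating that belief from observations) can be sketched in a few lines. Everything below, including the state names, probabilities, and the `update_belief` helper, is an invented illustration, not PsychSim's actual API:

```python
# Minimal POMDP-style belief update (Bayes' rule over hidden states).
# All states, priors, and sensor-model values are invented for illustration.

def update_belief(belief, observation, sensor_model):
    """Return the posterior over hidden states after one observation.
    sensor_model[state][observation] = P(observation | state)."""
    posterior = {s: belief[s] * sensor_model[s][observation] for s in belief}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

# A toy "theory of mind" about a partner: is the partner reliable?
belief = {"reliable": 0.5, "unreliable": 0.5}
sensor_model = {
    "reliable":   {"correct_advice": 0.9, "wrong_advice": 0.1},
    "unreliable": {"correct_advice": 0.6, "wrong_advice": 0.4},
}

# Observing one piece of correct advice shifts belief toward "reliable".
belief = update_belief(belief, "correct_advice", sensor_model)
print(belief)  # {'reliable': 0.6, 'unreliable': 0.4}
```

PsychSim layers recursive models (agents modeling other agents' beliefs) on top of this kind of update; the sketch shows only the single-level mechanism.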
12.
Trust Calibration within a Human-Robot Team
Comparing Automatically Generated Explanations
Utilizing PsychSim to test the effects of different explanations on trust calibration
The Study by Wang, Pynadath, & Hill:
● Scenario places a human on a foreign reconnaissance mission in hostile territory with an intelligent robot
● Two robot capabilities: 100% accuracy and 60% accuracy
● Three explanation conditions: no explanation, observation explanation, confidence explanation
Wang, Pynadath, & Hill (2016)
13.
Explanation Examples
No Explanation: "I have finished surveying the doctor's office. I think the place is safe."
Observation: "I have finished surveying the Cafe. I think the place is safe. My sensors have not detected any NBC weapons in here. From the image captured by my camera, I have not detected any armed gunmen in the cafe. My microphone picked up a friendly conversation."
Confidence: "I have finished surveying the Cafe. I think the place is dangerous. I am 78% confident about this assessment."
Wang, Pynadath, & Hill (2016)
14.
The Results
Measures:
● Trust: self-reported rating of trustworthiness on a 7-point scale
● Transparency: self-reported rating of understanding of the robot's decision-making process on a 7-point scale
● Compliance: percent of decisions that matched the robot's recommendation
● Mission Success: percent of successful missions overall
● Correct Decisions: percent of decisions that were correct
Findings:
● Explanations positively influence trust, transparency, and mission success
● Confidence explanations provide more accurate trust calibration with low-ability robots
● The lack of 100% compliance with the 100%-accurate robot demonstrates the challenges of trust calibration and of fully utilizing highly competent agents
Wang, Pynadath, & Hill (2016)
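The compliance finding can be made concrete with a toy Monte Carlo sketch. This is not the study's model; the 50% unaided human accuracy and all other numbers are invented assumptions. It simply illustrates why under-compliance with a highly accurate robot (disuse) and over-reliance on a weak one (misuse) both cap the correct-decision rate:

```python
import random

# Toy simulation of trust calibration (illustrative only; not the study's model).
# A robot with a given accuracy issues recommendations; the human follows them
# with some probability, otherwise deciding alone at an assumed 50% accuracy.

def simulate(robot_accuracy, compliance, human_accuracy=0.5,
             trials=100_000, seed=42):
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        robot_right = rng.random() < robot_accuracy
        if rng.random() < compliance:
            correct += robot_right                    # follow the robot
        else:
            correct += rng.random() < human_accuracy  # decide alone
    return correct / trials

print(simulate(robot_accuracy=1.0, compliance=1.0))  # full use of a perfect robot
print(simulate(robot_accuracy=1.0, compliance=0.8))  # disuse costs correct decisions
print(simulate(robot_accuracy=0.6, compliance=1.0))  # misuse caps at robot ability
```

With a perfect robot, any compliance below 100% lowers the correct-decision rate toward unaided human performance, which is the calibration failure the bullet above describes.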
15.
Robots Developing Trust of Humans
Accommodating Human Variability in Human-Robot Teams through Theory of Mind
Robots must also develop trust in, and a theory of mind for, their human partners. Without these, the robot has only two options for its decision making:
A. 100% trust: blindly follow the human
B. 0% trust: ask the human to confirm actions incessantly
The Study by Hiatt, Harrison, and Trafton:
● Used a robot with the ACT-R cognitive architecture
● 2 scenarios: human behaving accurately but unexpectedly, human behaving in error
● 3 conditions: Theory of Mind, Simple Correction, and Blindly Follow
Hiatt, Harrison, & Trafton (2011)
16.
Robots Developing Theory of Mind for Humans
The Results
When rated on 3-point and 7-point scales, the Theory of Mind robot performed better on both intelligence and naturalness.
Hiatt, Harrison, & Trafton (2011)
17.
Future Study Recommendations
01 Replace self-report evaluation methods with neuroscientific studies
02 Expand the use of Theory of Mind simulators
03 Expand cultural and organizational studies to examine the utilization of digital agents
04 Academic study and evaluation of existing digital agent design guidelines as they relate to trust calibration
18.
Digital Agent Design Guidelines
Areas for academic study and evaluation
Conveying capabilities and limitations:
Help your user learn what the agentive
technology can and cannot do.
Understanding your user’s goals and
preferences: Specify how the agent learns
what the user wants to accomplish and
how they want it accomplished.
Permissions and authorizations: Help the
agent build trust and get permission to
access the information that will help the
agent do its job.
Pause and restart: Give users obvious
controls to put the agent on hold and
resume it again.
Monitoring and notifications: Provide a
way for users to check on the agent, and
help them understand and build confidence
in the agent's performance.
Play and Practice: Some users will want to
play alongside the agent as it does its work,
to keep in practice or to see if they can
beat the agent at its own game.
Limited resources: Signal when the agent
is running out of something that it needs to
run and how the user can replenish it most
easily.
Simple manipulations: If the agent is
embodied in a robot, ensure that the user
can make corrections physically.
Tuning triggers and behaviors: Design
easy controls to correct the agent when it
reacts to things it shouldn’t or correct the
agent when it behaves in undesirable ways.
Handing off to the user or an
intermediate: Ensure that signals are clear
and assurances are comforting when
control shifts.
Takeback: Determine if the agent should
take back control automatically as soon as
it can, or if it should be a manual process.
Disengagement and death: Ensure that the
agent can detect whether it has outlived its
use or its user and gracefully handle the
disengagement. Pass control to another if
appropriate.
Noessel (2017)