Learning Social Affordances and Using Them for Planning

This study extends the learning and use of affordances on robots on two fronts. First, we use the very same affordance learning framework that was used for learning the affordances of inanimate things to learn social affordances, that is, affordances whose existence requires the presence of humans. Second, we use the learned affordances for making multi-step plans.
Specifically, an iCub humanoid platform is equipped with a perceptual system to sense objects placed on a table, as well as the presence and state of humans in the environment, and a behavioral repertoire that consists of simple object manipulations as well as voice behaviors that utter simple verbs. After interacting with objects and humans, the robot learns a set of affordances with which it can make multi-step plans towards achieving a demonstrated goal.
1. Approach
From sensorimotor experiences to planning.
Motivation
Goal: robots that can perceive social affordances and utilize them in multi-step planning to achieve a given or demonstrated goal.
Experimental Setup
[Setup figure: human tracker (Kinect), gaze (mocap), tabletop Kinect, feature display, and world display.]
Overview
Problem
The motor competences of robots operating in our environments are likely to remain inferior to ours on most fronts in the near future. To complete tasks that require competences beyond their abilities, robots will need to interact with humans to compensate for these deficiencies.
Inspiration
Babies and small children can compensate for their lack of physical competence by recruiting adults through social interaction. For instance, a candy on a high shelf becomes reachable for a child only in the presence of an adult, as long as the child can "manipulate" the adult properly using social behaviors.
Background
The concept of affordances and its formalization in robotics.
Affordances
“The affordances of the environment are what it offers the
animal, what it provides or furnishes, either for good or ill.” [1]
Ecological Psychology:
In Warren's (1984) stair-climbing experiments, affordances are explained as body-scaled metrics.
Neuroscience:
Canonical and mirror neurons, which are involved in motor actions, are also observed to be active during perception.
Human-Computer Interaction:
Everyday things can be designed so that what they afford is easily inferred by their users.
Affordance formalization
a = (e, b, f) [2]
[Diagram: the environment contains regular and social entities; behaviors such as push_left, say_take, and say_give map entities to effects such as grasped, knocked, disappeared, taken, moved_left, moved_right, and got_attention.]
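The triple formalization above can be captured directly in code. A minimal sketch (field names and example values are illustrative, not from the paper):

```python
from dataclasses import dataclass
from typing import Tuple

# An affordance relates an entity (its perceptual features), a behavior,
# and the effect that behavior creates on the entity: a = (e, b, f) [2].
@dataclass(frozen=True)
class Affordance:
    entity: Tuple[float, ...]   # perceptual features of the entity
    behavior: str               # e.g. "push_left", "say_take", "say_give"
    effect: str                 # e.g. "grasped", "knocked", "got_attention"

a = Affordance(entity=(0.3, 0.7, 1.0), behavior="say_give", effect="taken")
print(a.behavior, "->", a.effect)  # say_give -> taken
```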
Training (interaction-data collection)
• iCub interacts with the environment (objects and/or humans) via the pre-coded behaviors in its repertoire.
• During each interaction epoch, the behavior id, the initial perceptual features, the change in the perceptual features, and the effect id are stored.
• For each behavior, an SVM model is trained which maps perceptual features to the effects this behavior can create.
• For each effect id, an effect prototype is generated.
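The per-behavior training step can be sketched as follows. This is a toy illustration with invented epoch records, and a nearest-centroid classifier stands in for the SVM the paper actually trains:

```python
from collections import defaultdict

# One record per interaction epoch: behavior id, initial features,
# feature change, and the supervised effect id (all values invented).
epochs = [
    ("push_left", (0.6, 0.5, 0.0), (-0.2, 0.0, 0.0), "moved_left"),
    ("push_left", (0.4, 0.5, 0.0), (-0.2, 0.0, 0.0), "moved_left"),
    ("say_give",  (0.5, 0.5, 1.0), ( 0.0, 0.0, 0.0), "taken"),
]

# Group training data per behavior, since one model is trained per behavior.
per_behavior = defaultdict(list)
for behavior, feats, change, effect in epochs:
    per_behavior[behavior].append((feats, effect))

def train_centroid_model(samples):
    """Nearest-centroid classifier standing in for the per-behavior SVM."""
    by_effect = defaultdict(list)
    for feats, effect in samples:
        by_effect[effect].append(feats)
    return {e: tuple(sum(col) / len(col) for col in zip(*fs))
            for e, fs in by_effect.items()}

def predict(model, feats):
    """Return the effect id whose centroid is closest to the features."""
    return min(model, key=lambda e: sum((a - b) ** 2
                                        for a, b in zip(model[e], feats)))

models = {b: train_centroid_model(s) for b, s in per_behavior.items()}
print(predict(models["push_left"], (0.5, 0.5, 0.0)))  # moved_left
```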
Planning (for a given/demonstrated goal)
• iCub extracts initial features from the environment.
• The initial features are fed to the SVM models to obtain effect id predictions.
• For every predicted effect id, the effect prototype is fetched and summed with the initial features to predict the final state of the environment if that particular behavior is applied.
• The state space is searched in breadth-first order until a solution is found or the search depth limit is hit.
• Once a plausible plan is found, it is executed while watching whether the plan becomes obsolete. If so, the robot re-plans.
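The breadth-first search over predicted states can be sketched as below. The behaviors, the two-feature state, and the fixed step size are invented for illustration; the paper predicts next states from learned effect prototypes rather than from this hand-written table:

```python
from collections import deque

# Hypothetical per-behavior effect prototypes over (position-x, position-y):
# "+" consistent increase, "-" consistent decrease, "0" no change.
PROTOTYPES = {
    "push_right":   ("+", "0"),
    "push_left":    ("-", "0"),
    "push_forward": ("0", "+"),
}
DELTA = 1.0  # assumed step size

def apply_prototype(state, proto):
    """Predict the next state by summing the prototype with the features."""
    step = {"+": DELTA, "-": -DELTA, "0": 0.0}
    return tuple(v + step[c] for v, c in zip(state, proto))

def plan(initial, goal, depth_limit=5):
    """Breadth-first search for a behavior sequence reaching the goal."""
    queue = deque([(initial, [])])
    seen = {initial}
    while queue:
        state, seq = queue.popleft()
        if state == goal:
            return seq
        if len(seq) >= depth_limit:
            continue
        for behavior, proto in PROTOTYPES.items():
            nxt = apply_prototype(state, proto)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, seq + [behavior]))
    return None  # no plan within the depth limit

print(plan((0.0, 0.0), (2.0, 1.0)))
```

Re-planning would simply re-run `plan` from freshly extracted features whenever execution monitoring detects that the current plan has become obsolete.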
Training
Affordances are learned and effect prototypes are generated.
Planning
Learned affordances and effect prototypes are used in planning.
Affordance Learning
• 413 triplets of (e, b, f) are collected from direct object interactions, and 150 triplets from social interactions with human participants (4 female, 6 male, aged 20-40: 4 undergraduates, 2 graduate students, and 4 researchers with non-CS backgrounds).
• The dataset is first fed to the ReliefF feature-selection algorithm to extract the relevant features.
• After filtering the irrelevant features out of the whole dataset, an SVM model is trained for each behavior with the modified dataset.
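The idea behind the relevance filtering can be sketched with a simplified Relief pass (single nearest hit/miss) standing in for the full ReliefF algorithm; the toy dataset is invented so that feature 0 correlates with the effect label while feature 1 is noise:

```python
import math

# Toy dataset: feature 0 is informative for the effect label,
# feature 1 is noise. Relief-style weights should reflect that.
X = [(0.1, 0.9), (0.2, 0.1), (0.9, 0.5), (0.8, 0.2)]
y = ["no_change", "no_change", "moved", "moved"]

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def relief(X, y):
    """Simplified Relief: reward features that separate a sample from its
    nearest miss and penalize ones that separate it from its nearest hit."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    for i, (xi, yi) in enumerate(zip(X, y)):
        others = [(xj, yj) for j, (xj, yj) in enumerate(zip(X, y)) if j != i]
        hit = min((x for x, lab in others if lab == yi),
                  key=lambda x: dist(xi, x))
        miss = min((x for x, lab in others if lab != yi),
                   key=lambda x: dist(xi, x))
        for f in range(n_feat):
            w[f] += abs(xi[f] - miss[f]) - abs(xi[f] - hit[f])
    return [v / len(X) for v in w]

weights = relief(X, y)
print(weights)  # feature 0 should score higher than feature 1
```

Features whose weight falls below a threshold would then be dropped before the per-behavior SVMs are trained.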
Effect Prototype Generation
• Effects are labeled in a supervised manner during each interaction epoch with an effect id chosen from a set of effects such as grasped, knocked, no-change, disappeared, moved-right, moved-left, moved-forward, pulled, sat-down, stood-up, got-attention, and got-closer.
• For each effect id, the corresponding samples in the dataset are clustered unsupervisedly in the mean-variance space of each feature using RGNG [3].
• RGNG finds four naturally formed clusters, which are labeled as no change ("0"), inconsistent change ("x"), consistent positive change ("+"), and consistent negative change ("-").
• Effect categories are thus compactly represented as a vector of "0", "+", "-", and "x" symbols denoting the change in each feature value.
[Pipeline figure: supervised effect ids and feature filtering feed an SVM model per behavior; unsupervised clustering of the feature means and variances (N×2) over Effect_1 ... Effect_m yields the generated effect prototypes ("+", "-", "0", "x").]
[Prototype figure. Vertical axis: E1 moved-right, E2 moved-left, E3 moved-forward, E4 pulled, E5 disappeared. Horizontal axis: F1 position-x, F2 position-y, F3 human-presence. F1, F2, and F3 are detected as consistently increasing (red upward arrow) or consistently decreasing (blue downward arrow) for effects E1, E2, E3, and E4.]
[Planning figure: starting from INIT, behaviors (push-right, push-forward, pass-me, take, push-left) are expanded via PREDICT steps in breadth-first order.]
Affordance Prediction: an SVM maps object features to effect ids.
Prototype Fetching: each effect id f_i is mapped to its effect prototype proto_{f_i}, e.g. (+, x, x, 0, -, +).
The fetched prototype is summed (Σ) with the initial features to yield the estimated features.
Search for a behavior sequence to reach the given goal.
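The per-behavior prediction step can be sketched as follows. The prototype, feature values, and step size are invented; in particular, the handling of "x" (inconsistent change) as "leave unchanged" is an assumption of this sketch:

```python
# Hypothetical fetched prototype and initial features for one behavior.
proto = ("+", "x", "0", "-")
initial = (0.4, 0.2, 1.0, 0.7)
DELTA = 0.1  # assumed step size per consistent change

def estimate(features, proto, delta=DELTA):
    """Sum the fetched prototype with the initial features; features
    marked "x" (inconsistent change) are left unchanged in this sketch."""
    step = {"+": delta, "-": -delta, "0": 0.0, "x": 0.0}
    return tuple(round(v + step[c], 6) for v, c in zip(features, proto))

print(estimate(initial, proto))  # (0.5, 0.2, 1.0, 0.6)
```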
Conclusion
• We used the same affordance learning framework [2] developed for discovering the affordances of inanimate things to learn social affordances, that is, affordances whose existence requires the presence of humans.
• We demonstrated that our humanoid robot can interact with objects and with humans (using voice commands) and, from these interactions, learn what the objects as well as the humans afford.
• We showed that the robot can ask for human assistance whenever it is required while executing multi-step plans to satisfy demonstrated/given goals.
• Our approach towards learning social affordances is in line with the findings that affordances at different levels (intra-level and inter-level) share the same intrinsic constraints and organizations [4].

Kadir F. Uyanik, Yigit Caliskan, Asil K. Bozcuoglu, Onur Yuruten, Sinan Kalkan, Erol Sahin
KOVAN Research Lab, Dept. of Computer Eng., METU, Ankara, Turkey
References
[1] Gibson, J. (1986). The ecological approach to visual perception. Lawrence Erlbaum.
[2] Sahin, E., Cakmak, M., Dogar, M., Ugur, E., & Ucoluk, G. (2007). To afford or not to afford: A new formalization of affordances toward affordance-based robot control. Adaptive Behavior, 15(4), 447-472.
[3] Qin, A. K., & Suganthan, P. N. (2004). Robust growing neural gas algorithm with application in cluster analysis. Neural Networks, 17(8-9).
[4] Richardson, M., Marsh, K., & Baron, R. (2007). Judging and actualizing intrapersonal and interpersonal affordances. Journal of Experimental Psychology: Human Perception and Performance, 33(4), 845.

Acknowledgement
This work is partially funded by the EU project ROSSI (FP7-ICT-216125) and by TÜBİTAK through project 109E033. The authors Kadir F. Uyanik, Onur Yuruten, and Asil K. Bozcuoglu acknowledge the support of the Turkish Science and Technology Council through the 2210 scholarship.
Figures in the Planning section are adapted from (Mutlu, 2009) with the author's permission.