Graduate School for Health Sciences
University of Bern
Aphasia and dialogue:
What eye movements reveal about the processing of co-
speech gestures and the prediction of turn transitions
PhD Thesis submitted by
Basil Christoph Preisig
from Schwellbrunn, AR
for the degree of
PhD in Health Sciences (Neurosciences)
Thesis advisor
Prof. Dr. med. RenĂŠ MĂźri
Perception and Eye Movement Laboratory
Department of Neurology and Clinical Research, Faculty of Medicine, University of Bern
Thesis co-advisor
Prof. Dr. med. Jean-Marie Annoni
Neurology Unit
Department of Medicine, Faculty of Science, University of Fribourg and H-FR
Accepted by the Faculty of Medicine and the Faculty of Human Sciences of the
University of Bern
Bern, Dean of the Faculty of Medicine
Bern, Dean of the Faculty of Human Sciences
Dear Helene
for your love, your patience, and your confidence
Table of contents
Abstract
Summary
Acknowledgements
1 Introduction
1.1 Dialogue
1.2 Gesture
1.2.1 Why do people gesture and what makes a movement a gesture?
1.2.2 The classification of co-speech gestures
1.2.3 Neural correlates of gesturing
1.3 Turn-taking
1.4 Aphasia: A multimodal disorder
1.4.1 Definition, phenomenology, and neuroanatomy
1.4.2 Aphasia diagnosis and clinical syndromes
1.4.3 Aphasia, apraxia, and gesture
1.4.4 Turn-taking in aphasia
1.5 The assessment of visual exploration
1.5.1 The function and the recording of eye movements
1.5.2 Visual exploration of co-speech gestures
1.5.3 Eye movement analysis and turn-taking
2 Rationale and aims
3 Empirical contribution
3.1 Synopsis of the studies
3.2 Original publications
3.2.1 Study 1
3.2.2 Study 2
3.2.3 Study 3
4 General discussion
5 Outlook
6 References
7 Curriculum vitae Basil Christoph Preisig
8 Complete list of publications
9 Declaration of Originality
Abstract
Two intriguing aspects of human communication are the occurrence of co-speech gestures
and the alternating exchange of speech acts through turn-taking. The present thesis aimed
to investigate both aspects by means of eye movement recordings in patients with post-
stroke aphasia. In particular, it was assessed whether patients’ linguistic deficits lead to
altered visual processing of co-speech gestures and whether aphasic impairments have an
impact on the capability to predict turn transitions.
The findings obtained from two studies imply that co-speech gesture processing is not affected in aphasic patients. On the contrary, we found that patients benefit from multimodal information provided through the congruent presentation of speech and co-speech gestures. However, aphasic patients focused less on the visual speech component (i.e., they made fewer fixations on the speaker's face). This could indicate a general deficit in integrating audio-visual information, causing aphasic patients to avoid interference between the visual and the acoustic speech signal. In a third study, we addressed the frequency and
the precise timing of eye movements in relation to the turn transitions between speaking
actors. Patients with aphasia shifted their gaze less frequently according to the flow of the
conversation, although there was no difference with regard to the timing of their gaze shifts.
In this study, we could further show that higher lexico-syntactic processing demands lead to
a reduced gaze shift probability in aphasic patients. This finding might imply that patients
miss more opportunities to make their own verbal contributions when talking to their family
members.
Future studies should target gesture processing and turn-taking capabilities in aphasic
patients during face-to-face interaction. This is important in order to verify whether the presented findings can be generalized to patients' everyday life.
Summary
Gestures and the mutual exchange of information are central components of our everyday communication. The aim of the present dissertation was to investigate, by means of eye movement recordings, the underlying processing of gestures in patients with an acquired language disorder (aphasia). In addition, the role of aphasia in the anticipation (i.e., prediction) of turn transitions was examined.
The results of two studies indicate that the processing of co-speech gestures is not impaired in patients with aphasia. On the contrary, patients benefit from the multimodal information provided by co-speech gestures with congruent content. What did emerge, however, is that aphasic patients focus less on the visual component of the speech signal (i.e., they look less at the speaker's face). This could point to a general deficit in the integration of audio-visual information, leading aphasic patients to avoid interference between the visual and the acoustic speech signal. A third study examined the frequency and the precise timing of gaze movements in relation to turn transitions. Aphasic patients followed the turn transitions in the dialogue with their gaze less frequently, and the probability that they turned their gaze towards the upcoming speaker decreased with increasing complexity of the lexico-syntactic content. With regard to the timing of the gaze shifts, there were no differences from healthy control participants: patients reacted as quickly as healthy individuals. The results could indicate that, in everyday life, patients often miss the right moment to contribute something to the conversation themselves.
Future studies should focus on the visual perception of gestures and on turn-taking behaviour in direct, face-to-face conversation with patients. This is important in order to verify whether the presented results can be transferred to patients' everyday lives.
Acknowledgements
I would like to thank everyone who supported me during my time as a doctoral student and thereby made the completion of my dissertation possible. The last three and a half years were a very fulfilling time, and I miss them already.
First, I would like to thank all the participants who took part in our studies over the course of my doctorate. Without their commitment, this work would not have been possible.
The good collaboration with the speech and language therapists was also crucial to the success of this work. My sincere thanks go to all speech and language therapists of the cognitive and restorative neurology unit of the Inselspital, of the neurology and neurorehabilitation centre at the Luzerner Kantonsspital, and of the Spitalzentrum Biel. Special thanks for coordinating and organising appointments and for conducting additional diagnostics are due to Sandra, Susanne, Julia, Melanie, Marianne, Corina, Carmen, Gabriela, Monica, and Nicole.
I would like to warmly thank Reto and Gianni for the excellent video material that we were able to record, with their support, in the television studio of the Inselspital.
Dear René, thank you very much for making this doctoral thesis possible. Over the last three and a half years I have learned a great deal from you. Working on the project Aphasia and Gesture was a fulfilling and very exciting challenge. Thank you for taking so much time for me.
Dear Jean-Marie, my sincere thanks to you as well for supervising my work and for devoting so much time to my concerns.
Dear lab team (Simone, Noëmi, Rebecca, Rahel, Diego, Tobia, Tim, and Dario), thank you for an unforgettable time. Dear Noëmi, dear Simone, thank you for sharing all the highs and lows of doctoral life with me. I would like to warmly thank Noëmi for the excellent collaboration within the research project, and Rebecca and Noëmi for proofreading my doctoral thesis. Dear Giuseppe, thank you very much for all the support in engineering. You taught me that programming is actually a lot of fun. I would also like to thank Klemens for always letting me draw on his neuropsychological expertise. Many thanks also to the gerontechnology and rehabilitation group for the very good collaboration.
Dear Mum, dear Dad, thank you for your love and devotion, for your support during my education, for your patience, and for always awakening and fostering my interests.
Dear Moritz, thank you for being such a loyal friend and brother to me. During our training sessions together at the gym, I could always shake off the stress of work and recharge my energy.
Many thanks to all my colleagues, relatives, and friends who have accompanied me through this time. I would especially like to thank Martin and Mischa: I have been able to count on your friendship since kindergarten, and that means a great deal to me.
Dear Helene, you are the great fortune of my life. Thank you for your patience and love. You give me so much strength.
1 Introduction
Gestures and the conversational exchange of speaking turns are universal features of human communication, occurring across all cultural and linguistic backgrounds (Kita, 2009;
Levinson, 2006). In young children, gestural expression emerges before language acquisition
(Bates & Dick, 2002) and it has further been shown that the use of gestures predicts later
language development (Acredolo & Goodwyn, 1988; Rowe & Goldin-Meadow, 2009).
Moreover, humans seem to have a predisposition for turn-taking: by the age of 6 months, children already follow the flow of a conversation with their eye gaze (Augusti, Melinder, & Gredeback, 2010). Levinson (2006) argued that human beings have an inherited social and interactive orientation, which he refers to as the "interaction engine" and which might be the source of human turn-taking. In the present thesis, I address the question of whether these features of human communication are affected in patients with an acquired language disorder. The following chapter provides a broad introduction to the field of research. It starts with the interactional infrastructure, introducing dialogue, gestures, and turn-taking. Subsequently, aphasia is introduced as an acquired language disorder with a focus on its multimodal aspects. In the final section of the chapter, the assessment of visual exploration by means of eye movement recordings is established as a valid and reliable technique to investigate co-speech gesture perception and the on-going processing of speaking turns.
1.1 Dialogue
A dialogue can be defined as the conversational exchange between two or more
interlocutors. In its basic form, the dyadic dialogue involves two interlocutors, the speaker
and the addressee (i.e., the listener). Conversational exchange through dialogue is a
fundamental form of language use. Typically, it is the first modality of language acquisition
throughout children’s development and for some people, and even whole societies, it
remains their only modality of language use (Clark & Wilkes-Gibbs, 1986).
A dialogue constitutes a collaborative process in which the conversation partners need to
negotiate who is to talk at which time. It is believed that human beings rely on an inherent
turn-taking system, which organises their opportunities to speak through social interaction
(Sacks, Schegloff, & Jefferson, 1974). During the speaker's utterance, the listener signals by so-called back-channel responses (head nods, yes's, and other interjections) that he or she has understood what has been said (Duncan, 1972; Goodwin, 1981; Schegloff, 1982). Moreover, the listener integrates audio-visual information from the speech signal and, beyond that, the paraverbal and non-verbal behaviour of the speaker. The wide range of non-verbal behaviour also includes gestural movements of the hands and arms, which are probably the most studied expressive forms of non-verbal behaviour (Kendon, 1980). The following two sections of this chapter provide an introduction to the fields of gesture and turn-taking research.
1.2 Gesture
We are discussing a phenomenon that often passes without notice, though
omnipresent. If you watch someone speaking, in almost any language and under nearly
all circumstances, you will see what appears to be a compulsion to move the hands and
arms in conjunction with the speech. (McNeill, 2000, p. 1)
1.2.1 Why do people gesture and what makes a movement a gesture?
The origin of gesturing in human interaction is still mysterious. It has been speculated
whether gesturing in humans is based on social learning, i.e., whether we gesture because
other people do so. Contrary to this assumption, it has been found that also congenitally
blind children produce gestures (Iverson & Goldin-Meadow, 1998). An alternative
interpretation is that speakers produce gestures in order to express themselves more clearly
to their interaction partners. However, it has been shown that people also gesture if there is
no visual contact with their interaction partner (e.g., conversation over the phone) or if the
interaction partner is blind (Iverson, Tencer, Lany, & Goldin-Meadow, 2000; McNeill, 1992).
Thus, gesturing does not only depend on the visual presence of an addressee but also on the
speaking process itself. Indeed, it has been shown that human gestures serve both
communication (Goldin-Meadow, Alibali, & Church, 1993; McNeill, 1992) and speech
production (Krahmer & Swerts, 2007; Krauss & Hadar, 1999). This means that gestures can
supplement speech (e.g., a shrug to express one's uncertainty) or even substitute for it (e.g., a victory sign), but they also facilitate lexical retrieval and complement speech prosody.
Having introduced their function, one should ask under which conditions movements are
perceived as gestures. Kendon (2004) suggested that gestures refer to all visible actions of
body parts when they are used as an utterance. Kendon and others also stressed the
communicative function of gestures (Kendon, 1994; Lausberg, 2011; McNeill, 1992;
Thompson & Massaro, 1994). In a recent study, Novack, Wakefield, and Goldin-Meadow
(2016) reported that hand movements are more likely to be perceived as gestures when
they are accompanied by speech. This form of gestural hand movement, from here on referred to as a co-speech gesture, constitutes one focus of the present thesis. When I later refer to co-speech gestures, I mean gestures that occur concomitantly or simultaneously with speech, irrespective of whether they convey communicative meaning. As outlined above, gestures serve diverse functions: these include not only communicative meaning but also the facilitation of speech production.
1.2.2 The classification of co-speech gestures
The first systematic classification of spontaneous co-speech gestures goes back to Efron
(1941/1972). His classification system provided the basis for later approaches to classify
different gesture types (Ekman & Friesen, 1969; Lausberg, 2011; McNeill, 1992; Sekine, Rose,
Foster, Attard, & Lanyon, 2013). The aim of gesture classification is to ascribe a communicative function to a particular gestural movement. The classification is difficult because, in contrast to speech, gestural expressions are most often idiosyncratic: there is no common lexicon defining how gestural movements are to be performed. Furthermore, co-speech gestures are hand actions that are almost never used without language, and many forms of gestural expression lack a clear communicative function outside the language context. People therefore cannot unambiguously recognize the communicative intention of co-speech gestures without speech (Krauss, Morrel-Samuels, & Colasante, 1991).
In the studies conducted for this thesis, we did not classify the communicative function of co-speech gestures; nevertheless, the reader should be briefly introduced to the most prominent categories, which have found their way into different classification systems. Iconic gestures
typically depict the shape of an object (iconographic gestures) or the trajectory of a
movement (kinetographic gestures). This category has a close relation to the verbal
utterance, and its meaning is often redundant with it. Deictic gestures are pointing gestures, most often performed with the index finger, that refer to a visible or an invisible object (e.g., an image of an abstraction). Emblems are gestures which convey culturally conventionalized and language-specific meaning (e.g., the thumbs-up). Finally, beats or batonic
gestures represent rhythmic hand movements that go along with the pulsation of speech.
The typical beat is a short and quick flip of the hand or fingers, back and forth, or up and
down (McNeill, 1992).
Besides the classification of their communicative function, Kendon (1980; 2004) distinguished in his seminal work different phases within the gestural movement: (1) the preparation phase, movement away from the resting position in preparation of the next phase; (2) the stroke phase, the main phase of a gesture unit, when the movement excursion is closest to its peak; (3) hold phases, motionless phases that may occur before (pre-stroke hold) or after (post-stroke hold) the stroke; and (4) the recovery or retraction phase, during which the hands return to the resting position. It should be noted that our studies on gesture perception (study 1 and study 2) covered different phases of the gesture unit. As introduced in section 3.1, study 1 included the presentation of isolated gestural movements on a trial-by-trial basis; thus, the whole gesture unit with all its phases was considered for the analysis. In contrast, study 2 examined the visual exploration of spontaneous dialogue and therefore restricted the analysis to the stroke phase of the gesture unit, because the stroke phase is the most salient part of the gestural movement and can be identified with the highest reliability.
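To make the phase structure concrete, the following is a minimal sketch that encodes Kendon's phases and restricts a set of phase annotations to the stroke interval, in the spirit of the stroke-based analysis of study 2; the annotation format and all timings are invented for illustration and are not taken from our studies.

```python
from enum import Enum

class GesturePhase(Enum):
    """Kendon's phases of a gesture unit."""
    PREPARATION = 1  # movement away from the resting position
    STROKE = 2       # main phase; movement excursion near its peak
    HOLD = 3         # motionless phase before or after the stroke
    RETRACTION = 4   # hands return to the resting position

# Hypothetical annotation of one gesture unit: (phase, start_ms, end_ms).
gesture_unit = [
    (GesturePhase.PREPARATION, 0, 250),
    (GesturePhase.STROKE, 250, 700),
    (GesturePhase.HOLD, 700, 850),
    (GesturePhase.RETRACTION, 850, 1100),
]

# Restrict an analysis to the stroke phase only, as in study 2.
strokes = [(start, end) for phase, start, end in gesture_unit
           if phase is GesturePhase.STROKE]
print(strokes)  # [(250, 700)]
```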
1.2.3 Neural correlates of gesturing
There has been growing interest in the neural bases of gesturing because of conflicting views
regarding the relationship between speech and gestures. Whilst some researchers assume
that speech and gestures rely on different communication systems, others posit that speech
and gesture rely on a unitary communication system. According to the first view, speech and
gestures are tightly interacting, yet independent entities (Feyereisen, 1987; Hadar, Wenkert-
Olenik, Krauss, & Soroker, 1998; Levelt, Richardson, & La Heij, 1985). Regarding the latter
view, it has been proposed that gestures represent visible action as utterance (Kendon,
2004), or even thoughts and mental images (McNeill, 1992). Given the assumption that
speech and gesture rely on the same communication system, they would also be processed
by shared neural networks. Thus, gesturing and speaking would both be affected if their
common neural network is damaged due to a brain lesion. Over the last ten years, neuroimaging studies have tried to identify the functional neural networks responsible for gesture perception and gesture production.
Indeed, several studies have reported that brain areas associated with language functions
also respond when people perceive co-speech gestures (Holle, Obleser, Rueschemeyer, &
Gunter, 2010; Straube, Green, Bromberger, & Kircher, 2011; for reviews see also Andric &
Small, 2012; Marstaller & BurianovĂĄ, 2014). Furthermore, it has been shown that the lateral
temporal cortex, an area which is part of the language system, responds more strongly if
speech is accompanied by co-speech gestures (Beauchamp, Lee, Haxby, & Martin, 2002).
Moreover, Özyürek, Willems, Kita, and Hagoort (2007) showed by means of
electroencephalography (EEG) that verbal and gestural mismatches during sentence processing elicit comparable event-related potentials, suggesting that the brain integrates both types of information simultaneously. In addition, co-speech gestures also
activate regions outside the typical language areas in the parietal lobe and in the premotor
cortex (Dick, Goldin-Meadow, Hasson, Skipper, & Small, 2009; Green et al., 2009; Holle,
Gunter, RĂźschemeyer, Hennenlotter, & Iacoboni, 2008). These areas seem to be involved in
the processing of hand actions and action understanding (Andric & Small, 2012).
Because of technical restrictions (i.e., in a contemporary MRI scanner participants cannot
freely move their arms and hands), neuroimaging studies on gesture production are scarce.
Marstaller and Burianova (2015) approached the problem by presenting nouns referring to tools (e.g., scissors) that are commonly used unimanually. The subjects were instructed to
produce a corresponding action verb, an action gesture, or a combination of both. The
authors reported that co-speech gesture production seems to be mainly driven by the same
neural network as language production. Additional evidence is provided by a study which
applied near-infrared spectroscopy: Oi, Saito, Li, and Zhao (2013) found gesture-dependent modulation of brain activity in the language network during story retelling.
Taken together, previous research indicates that the perception and the production of co-
speech gestures elicit brain activity in a neural network which overlaps with the neural
network for language processing. Figure 1, obtained from Dick, Mok, Beharelle, Goldin-
Meadow, and Small (2014), illustrates the overlapping activation found for gesture and
language perception.
Figure 1. Activation peaks in the left inferior frontal and the posterior temporal cortex obtained from
studies which investigated language perception (i.e., how semantic ambiguity is resolved during
language comprehension) and from studies investigating gesture perception (i.e., how gestures
contribute to the resolution of semantic ambiguity). Source: (Dick et al., 2014, p. 902).
1.3 Turn-taking
Just as it is desirable to avoid bumping into people on the street, it is desirable to avoid
in conversations an inordinate amount of simultaneous talking (Duncan, 1972, p. 283)
Dialogue is characterized by the regular exchange of speaking turns between the
interlocutors. Duncan (1972) suggested that there is a regular mechanism in our culture for
managing the taking of speaking turns. This mechanism is described as turn-taking and it
allows the smooth and appropriate exchange of speaking turns (Goffman, 1963; Yngve,
1970). Sacks et al. (1974) reported in their seminal article the following important observations concerning turn-taking: one party talks at a time; turn order is not fixed but varies; a speaker may select the next speaker (e.g., by addressing somebody with a question); or the turn can be taken by the next speaker at the next possible completion. Probably their most important observation was that between single turns (i.e., from one turn to the next), there are no or only slight inter-speaker gaps or inter-speaker overlaps. However, the
minimal vocal response time in human communication is around 200 ms (Fry, 1975; Izdebski & Shipp, 1978), and the production of even a simple utterance during picture naming takes about 600 ms (Indefrey & Levelt, 2004). Therefore, the projection theory, which goes back to
Sacks et al. (1974), assumes that the next speaker is able to predict when the current
speaker will finish based on the recognition of the linguistic units within the turn. In contrast
to this view, representatives of the reaction or signal theory (Duncan, 1972; Kendon, 1967;
Yngve, 1970) assume that the next speaker reacts to a turn-yielding signal, which is provided
by the current speaker near the end of the utterance. Turn-yielding signals are described as
discrete behavioural cues, such as the intonation of the speaker’s voice, lengthening of the
final syllable, gestures, stereotypic expressions (e.g., you know…), or syntactic cues (e.g., the
completion of a grammatical clause). Heldner and Edlund (2010) conducted an extensive analysis of telephone and face-to-face interactions in 370 pairs of Dutch, Swedish, and Scottish English speakers. Their findings suggest that turn-taking is not as precise as claimed by the projection theory: overlaps occurred in 40% of all between-speaker intervals (i.e., inter-speaker gaps and inter-speaker overlaps). Moreover, the proportion of between-speaker intervals shorter than 200 ms was 55% to 59%, which would speak against the reaction theory. The proportion of between-speaker intervals involving a gap or an overlap long enough to allow a reactive response was only 41% to 45%. The authors concluded that they could rule out neither projected responses nor reactive ones.
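The 200 ms criterion can be made concrete with a minimal sketch that classifies a single between-speaker interval in the sense used above; the function and the example timings are illustrative assumptions, not the analysis procedure of Heldner and Edlund (2010).

```python
# Minimal sketch, assuming the 200 ms criterion above: classify a single
# between-speaker interval. All values are illustrative.

MIN_RESPONSE_MS = 200  # approximate minimal vocal response time

def classify_interval(current_turn_end_ms, next_turn_start_ms):
    interval = next_turn_start_ms - current_turn_end_ms
    kind = "overlap" if interval < 0 else "gap"
    # An overlap, or a gap shorter than the minimal response time, implies
    # that the next turn was planned (projected) before the current turn
    # ended; longer gaps leave room for a purely reactive response.
    timing = "projected" if interval < MIN_RESPONSE_MS else "reactive"
    return interval, kind, timing

print(classify_interval(5300, 5420))  # (120, 'gap', 'projected')
print(classify_interval(5300, 5250))  # (-50, 'overlap', 'projected')
print(classify_interval(5300, 5700))  # (400, 'gap', 'reactive')
```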
1.4 Aphasia: A multimodal disorder
1.4.1 Definition, phenomenology, and neuroanatomy
Aphasia is an acquired language disorder which occurs as a consequence of brain damage to the language-dominant hemisphere. The incidence of aphasia in the Swiss population is 0.43
per 1000 citizens, which corresponds to 3440 new cases per year. The estimated number of
patients who are living with aphasia in Switzerland is about 5000 (Koenig-Bruhin, Kolonko,
At, Annoni, & Hunziker, 2013). The disorder generally affects different capabilities in all
language modalities, i.e., speaking and listening (language comprehension), reading and
writing. The signs and symptoms of language impairment depend on the localization of
circumscribed brain lesions. Therefore, aphasia has attracted the interest of different
disciplines (e.g., neurology, psychology, linguistics, computational science, or philosophy),
because it provides a model to test their theories of mind and brain (Damasio, 1992).
Aphasia is a central disorder of language processing and has to be distinguished from motor speech disorders (e.g., dysarthria), which are characterized by poor articulation. In contrast, aphasia affects entire components of the language system (phonology, lexicon, syntax, and semantics).
Spontaneous speech in aphasic patients is characterized by word finding difficulties, poor
wording (mistakes in word choice), and morpho-syntactic mistakes in the sentence
structure. Word finding deficits occur in all variants of the disorder. Patients make longer
pauses which they fill with interjections (e.g., uh, er, or um), they constantly repeat
utterances (i.e., perseverations), use empty phrases, or they completely abort unfinished
sentences. Aphasic patients also produce unintended words that are semantically related (e.g., fridge for toaster) or unrelated to the intended word; such substitutions are referred to as semantic paraphasias. Patients with aphasia exchange speech sounds within the word
(phonological paraphasia), or they modify the word form until it becomes unrecognizable
(neologisms). The morpho-syntactic structure of their sentences can be described as either
agrammatic or paragrammatic. On the one hand, patients who show an agrammatic
sentence structure omit function words and inflection forms. They have a reduced
availability of verbs, their sentences are shorter and the syntax is simplified. Moreover, they
have difficulties with the word order. On the other hand, patients with paragrammatism use
excessive sentence structures: their sentences are complex and marked by erroneous doublings of sentence parts (Weniger, 2012).
In most cases, aphasia is the consequence of brain damage to the left cerebral hemisphere.
This is because language functions tend to be lateralized to the left hemisphere in most
right-handed (90%) and left-handed (70%) individuals (Knecht et al., 2000). Aphasia is
commonly caused by a cerebrovascular insult (80%), but it can occur as a result of virtually any other neurological incident, such as traumatic brain injuries, brain tumours, cerebral abscesses, or progressive brain diseases (e.g., dementia) (Koenig-Bruhin et al., 2013). Two types of cerebrovascular insult can be distinguished: ischemic stroke (85%) and hemorrhagic stroke (15%) (Hickey, 2003). In ischemic stroke, the blockage of a blood-supplying artery causes a shortage of oxygen in the supplied brain area, which leads to the death of neurons in the affected brain tissue. In hemorrhagic stroke, intracerebral bleeding causes cell necrosis in the affected brain tissue.
The first functional-anatomical models of aphasia can be traced back to the pioneering case studies conducted by Paul Broca (1861) and Carl Wernicke (1874). Broca treated a patient who could only produce the syllable tan. Based on the autopsy of this patient, Broca concluded that the crucial area for speech production has to be located in the inferior frontal gyrus. Wernicke connected the evidence presented by Broca with his own observation that patients with brain lesions in the superior temporal gyrus produce fluent but meaningless speech. Wernicke suggested that the motor plans for articulation and word meaning are represented in distinct areas in the inferior frontal and the superior temporal lobe, respectively. He initially believed that the two areas are connected by association fibers (fibrae propriae) running through the insular cortex; later, he accepted Constantin von Monakow's finding that the arcuate fasciculus connects Wernicke's area with Broca's area (Catani & Mesulam, 2008; Geschwind, 1965; Krestel, Annoni, & Jagella, 2013).
Contemporary models of speech processing (Friederici, 2011; Hickok & Poeppel, 2007;
Vigneau et al., 2006) assume a dual-route model consisting of ventral and dorsal pathways (see Fig. 2). The ventral pathways reach from the auditory cortex to the temporal pole and over the uncinate fasciculus and the extreme fiber capsule to the inferior frontal lobe and the frontal operculum; they are needed for the association of the auditory speech signal with conceptual semantic knowledge (e.g., language comprehension). The dorsal pathways reach from the temporo-parietal junction over the longitudinal fasciculus and the arcuate fasciculus to the premotor cortex and the inferior frontal gyrus; they are thought to serve as an integrative network for sensory-motor processes (e.g., verbal repetition) (Friederici, 2011).
Figure 2. Schematic illustration of the structural connections within the language network. The
dorsal pathways (longitudinal fasciculus, arcuate fasciculus) reach from the superior temporal lobe to
the premotor cortex and to Broca’s area. The ventral pathways (extreme fiber capsule, uncinate
fasciculus) reach from the temporal pole to Broca’s area and to the frontal operculum (FOP),
respectively. Source: (Friederici, 2011, p. 1360).
1.4.2 Aphasia diagnosis and clinical syndromes
The Aachen Aphasia Test (AAT) is a widely used test for aphasia diagnosis, severity
judgment, and syndrome classification in German-speaking populations (Huber, Poeck, &
Willmes, 1984). This test battery is based on well-defined linguistic criteria, fulfils
psychometric requirements, and provides validated and standardized test scores for the
target population.
The classification of different aphasia syndromes, which is also part of the AAT, is based on
characteristic symptoms in spontaneous speech. For aphasia of cerebrovascular aetiology, i.e., in patients after an ischemic stroke in the territory of the middle cerebral artery, it is a long tradition in aphasiology to differentiate four standard syndromes: anomic aphasia, Broca's aphasia,
Wernicke’s aphasia, and global aphasia (Weniger, 2012). The leading symptom for anomic
patients is a word finding deficit. Broca’s aphasics typically show agrammatic speech
production. Wernicke’s aphasics show paragrammatic sentence structures and global
aphasia is characterized by speech automatisms. A caveat of the syndrome classification is that many studies have found that lesion location often does not correspond well with the syndrome complex (Dronkers & Larsen, 2001; Penfield & Roberts, 2014). Therefore,
syndrome classification has become less important in contemporary research and in the
current clinical practice. Nevertheless, the AAT is still a widely used instrument in research
and in clinics for aphasia diagnosis and the assessment of aphasia severity.
The AAT consists of an evaluation of spontaneous speech and of five subtests (Token Test,
Naming, Comprehension, Repetition, and Written Language). For the studies included in the
present thesis we decided to select two subtests of the AAT, the Token Test and the Written
Language Test. Willmes, Poeck, Weniger, and Huber (1980) demonstrated that the
discriminative validity of the two subtests is as good as the discriminative validity of the
whole test battery. As its name implies, the Token Test consists of a selection of tokens
(circles and squares) in two different sizes and five different colours. The participant is instructed to point to the token that corresponds to the verbal command given by the experimenter (e.g., show me the big red circle). The Token Test was originally introduced as a
sensitive method to detect aphasic impairments of auditory language comprehension (De
Renzi & Vignolo, 1962). Interestingly, later studies found that the Token Test is equally
powerful in detecting patients with Broca’s aphasia, as it is in detecting patients with
Wernicke’s and global aphasia (Cohen, 1976; Orgass & Poeck, 1966; Poeck, Kerschensteiner,
& Hartje, 1972). The Written Language Test consists of three parts: reading single words and sentences, composing words from letters and compound words from single words, and writing from dictation.
1.4.3 Aphasia, apraxia, and gesture
The above-mentioned findings from neuroimaging studies (see section 1.2.3) demonstrate
that gesture and language processing elicit brain activity in overlapping neural networks.
These findings suggest that gesture and language rely on the same, or at least on shared,
neural processing.
Kimura (1973) observed that right-handers, who are thought to process language mainly
with their left brain hemisphere, predominantly use the right hand when gesturing. The
author concluded that there exists a common control system for free movements and
speaking, which is lateralized to the left brain hemisphere for most people. Kimura’s
conceptualisation was later corroborated by two studies in aphasic patients (Cicone,
Wapner, Foldi, Zurif, & Gardner, 1979; Glosser, Wiener, & Kaplan, 1986). Cicone et al. (1979)
compared co-speech gestures during spontaneous communication in two patients with
Broca’s aphasia with those produced by two patients with Wernicke’s aphasia. The authors
reported that gesture production closely matched speech output. In the patients with Wernicke's aphasia, the quantity of speech and gesture production resembled that of healthy controls, but their output was unstructured and difficult to understand. Patients with Broca's aphasia showed reduced speech and gesture production, but the clarity of the produced gestures was higher; in the authors' opinion, it even surpassed that of healthy controls. In a later study, Glosser and colleagues (1986) found a
negative correlation between language impairments and gestural complexity in aphasic
patients. Patients with more severe impairments produced less complex gestures. Cicone et
al. (1979), Glosser et al. (1986), and later also McNeill (1992) concluded that the
communicative competence to use co-speech gestures must be affected in aphasic patients, because gestures and speech rely on the same communication system.
This view has been challenged by other studies which found that patients with aphasia improve their communication if they use gestures (Behrmann & Penn, 1984; Herrmann, Reichle, Lucius-Hoene, Wallesch, & Johannsen-Horbach, 1988; Lanyon & Rose, 2009; Rousseaux, Daveluy, & Kozlowski, 2010). For example, Herrmann et al. (1988), who analysed the communication between aphasic patients and their relatives, found that the patients used more speech-replacing gestures than their interlocutors. In another study, Behrmann and Penn (1984) did not find the relationship between the severity of language impairments and gesture capabilities that was suggested by Glosser et al. (1986). Furthermore, Lanyon and Rose (2009) reported that even severely affected patients with almost no speech production capabilities were able to use communicative gestures.
A potential source of inconsistency is that the above cited studies did not assess the co-
occurrence of apraxia. Apraxia is an impairment of the ability to perform skilled, purposive
limb movements (Ochipa & Gonzalez Rothi, 2000). Like aphasia, apraxia is caused by left-hemispheric brain lesions (Liepmann, 1905). Previous research found that left-sided lesions can cause apraxia without aphasia or the reverse dissociation (Kertesz, Ferro, & Shewan, 1984); however, aphasia occurs without apraxia far more commonly than vice versa (Goldenberg, 2008). Patients with apraxia are impaired in gesture imitation (Buxbaum,
Kyle, & Menon, 2005; Goldenberg, 2008) and in performing pantomimes on verbal
command (Vanbellingen et al., 2010). Borod (1989) reported that praxis skills are positively
correlated with spontaneous gestural communication in patients with limb apraxia. In
contrast, Feyereisen, Barter, Goossens, and Clerebaut (1988) found a negative correlation
between the apraxia test score and the use of gestures in aphasic patients. Patients with
more severe apraxia, who also exhibited higher aphasia severity, produced more gestures. In
a more recent study, Hogrefe, Ziegler, Weidinger, and Goldenberg (2012) could show that
apraxia, rather than aphasia, affects the comprehensibility of the gestural expression. For
this purpose, videotaped cartoon narratives obtained from patients with severe aphasia and
different levels of apraxia severity were presented without sound to naĂŻve raters. For each
narration, the raters, who were familiar with the original cartoons, were asked to indicate which story had been told and which aspects of the story they recognized. The authors reported that the identification rate and the proportion of recognized story features correlated positively with the apraxia test scores, but not with the scores from the aphasia test battery (i.e., subtests of the AAT).
Taken together, previous research does not present a clear-cut relationship between
language impairments in aphasic patients and their ability to produce co-speech gestures.
Although there seems to be a vague connection between apraxia severity and the ability to perform communicative gestures, this relationship is probably restricted to patients with severe aphasia. Since perception and action (e.g., co-speech gesture production) are tightly related processes (Rizzolatti, Fadiga, Gallese, & Fogassi, 1996), it is important to also illuminate gesture perception in aphasic patients. Here, the fundamental question is
whether the perception of co-speech gestures improves language comprehension in aphasic
patients. Records (1994) investigated the contribution of referential gestures to speech perception in aphasic patients. In her study, she presented information either auditorily (target word), visually (referential gesture towards a target picture), or as a combination of both modalities (target word and referential gesture). She found that patients with lower language comprehension abilities relied more on the gestural information. Other
studies combined gesture and lexical learning therapy in mild (Kroenke, Kraft, Regenbrecht,
& Obrig, 2013) or gesture and naming therapy in severe cases of aphasia (Marshall et al.,
2012). In these studies, patients perceived a target word or picture together with a
referential gesture. In the following parts of the experiments, they were requested to repeat
the word or to name the picture while imitating the gesture. The results of these studies indicate that lexical learning and naming improve when trained together with a referential gesture.
In a nutshell, there is evidence that the perception of co-speech gestures may facilitate
speech perception and thus ultimately lead to deeper encoding of information in aphasic
patients, as has been shown in healthy participants (Cohen & Otterbein, 1992; Feyereisen, 2006). However, in contrast to the research on gesture production reviewed above, it has
not been studied whether there are differences concerning the visual perception of co-
speech gestures between aphasic patients and healthy controls.
1.4.4 Turn-taking in aphasia
Previous studies on the interactional structure of aphasic conversation (Holland, 1982; Prinz, 1980; Schienberg & Holland, 1980; Ulatowska, Allard, Reyes, Ford, & Chapman, 1992) showed that patients still adhere to the basic rules of turn-taking (e.g., only one speaker at a time) suggested by Sacks et al. (1974).
Schienberg and Holland (1980) found that turn-taking behaviour remained intact. They analysed the conversation between two aphasic patients, showing that the patients even used repair strategies for turn-taking errors when both speakers were talking at the same time.
The authors concluded that a naĂŻve observer, who is not familiar with the spoken language,
would not even notice the patients' language production deficits. Ulatowska and colleagues (1992) went one step further by systematically comparing dyads between aphasic patients and healthy controls in a role-play set-up. The participants were engaged in a conflict between a customer and a salesperson about dissatisfaction with a product or a service. The authors analysed a total of 18 dyads: four healthy control-healthy control, eight healthy control-aphasic patient, and six aphasic patient-aphasic patient dyads. Interestingly, the authors found that aphasic patients behaved comparably to healthy controls at the discourse level in terms of turn types and the range of speech acts.
A current model of turn-taking assumes that listeners try to determine the speaker’s
intentions to predict the unfolding of the speaker’s utterance (Pickering & Garrod, 2013). In
other words, listeners build a representation of what they believe their partner will say in
order to plan their own contribution. According to this model, listeners have to make
predictions about the content and about the timing of the speaker’s utterance. The authors
suggested that listeners rely on language comprehension to make predictions about the
content and on the speaker’s speech rate for the exact timing. Language comprehension
involves the extraction of phonological, syntactic, and most importantly semantic
information. However, the processing of semantic, syntactic, and phonological information is
disturbed in aphasic patients (Butler, Ralph, & Woollams, 2014; Caplan, Waters, Dede,
Michaud, & Reddy, 2007; Caramazza & Berndt, 1978; Jefferies & Ralph, 2006). One
consequence of their aphasic impairments might be that patients have difficulties predicting the unfolding of the speaker's utterance and thus have problems accurately timing their own speech act. Indeed, Schienberg and Holland (1980) reported that the only difference between dyads of healthy participants and dyads of aphasic patients was the length of the inter-speaker gaps. Hence, longer inter-speaker gaps in dyads between aphasic patients can be taken as an indication of timing difficulties.
linguistic impairments could be that aphasic patients miss opportunities for their own contributions (i.e., the turn-relevance place) because they have difficulties recognising the speaker's intentions. Ulatowska et al. (1992) found some indication for this assumption,
showing that the rate of conversational exchange (i.e., number of turns per minute) was
slightly higher for the healthy control-healthy control dyads. In sum, even if previous
research provides some indications, the ability to make correct predictions about upcoming
turn transitions has not been systematically studied in aphasic patients.
1.5 The assessment of visual exploration
1.5.1 The function and the recording of eye movements
Humans constantly move their eyes when capturing a visual scene because the region of acute vision covers only about two degrees of visual angle. The reason for this is the distribution of photoreceptor cells across the human retina. The retina is a light-sensitive layer of tissue which contains two types of photoreceptor cells, the rods and the cones. The rods are very light-sensitive photoreceptor cells which are important for scotopic vision (i.e., vision in darkness). The cones are responsible for colour vision and visual acuity (photopic vision). The fovea is the region of the retina that enables acute vision; it contains only cones and barely any blood vessels, and covers approximately two degrees of visual angle. Hence, the main function of eye movements during visual exploration is the alignment of the fovea with the objects of interest. Fixations are phases of spatially relatively stable eye gaze position; it is during fixations that visual processing takes place (Ilg & Thier, 2003). Saccades are quick movements of both eyes between two fixations.
The purpose of saccades is to align the fovea with a visual target. During saccades, the sensitivity of visual processing is reduced due to the phenomenon of saccadic suppression (Matin, 1974; Zuber & Stark, 1966). In sum, visual exploration consists of a continuous alternation between fixations and saccades.
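To make the two-degree figure concrete, consider a worked example assuming an illustrative viewing distance of $d = 70\,\mathrm{cm}$ (a typical value for screen-based eye tracking, not necessarily the distance used in our set-up). The width $s$ of the foveal field of view on the screen is

$$ s = 2d \tan\left(\tfrac{\theta}{2}\right) = 2 \times 70\,\mathrm{cm} \times \tan(1^\circ) \approx 2.4\,\mathrm{cm}. $$

At that distance, only a patch of screen roughly 2.4 cm wide is seen with full acuity at any moment, which is why extended scenes must be explored through an alternation of fixations and saccades.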
Contemporary eye tracking systems allow a precise estimation of the eye gaze position in
space and time. The eye tracking device used for the current studies is a binocular infrared
eye-tracker (RED, SensoMotoric Instruments GmbH, Teltow, Germany) with a temporal
resolution of 250 Hz and a maximal spatial resolution of 0.03°. This device contains two
infrared cameras which detect the pupil and the corneal reflection of the eye. Based on this
information, the system estimates the exact pupil position in relation to reference points, which are presented on the screen during the calibration procedure. A major advantage of this system is that it does not require a fixed headrest, which is very convenient for the assessment of eye movements in stroke patients.
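As a rough illustration of how such 250 Hz recordings are typically segmented into fixations and saccades, the sketch below applies a simple velocity threshold to the gaze signal; the 30°/s threshold is a common textbook convention and, like the function itself, is illustrative rather than the parameterisation used in our analyses.

```python
import numpy as np

FS_HZ = 250          # sampling rate of the eye tracker described above
VEL_THRESH = 30.0    # deg/s; an illustrative saccade velocity threshold

def detect_saccade_samples(x_deg, y_deg):
    """Label each gaze sample as belonging to a saccade (True) or not.

    x_deg, y_deg: 1-D arrays of gaze position in degrees of visual angle.
    Runs of sub-threshold samples between saccades form the fixations.
    """
    dt = 1.0 / FS_HZ
    velocity = np.hypot(np.diff(x_deg), np.diff(y_deg)) / dt  # deg/s
    return np.concatenate([[False], velocity > VEL_THRESH])

# Example: a stable fixation followed by a rapid 5-degree gaze shift.
x = np.array([0.0, 0.01, 0.02, 2.5, 5.0, 5.01])
y = np.zeros_like(x)
print(detect_saccade_samples(x, y))  # [False False False  True  True False]
```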
In their seminal paper, Land and Hayhoe (2001) demonstrated that the eyes usually fixate
the manipulated object during activities of daily living (e.g., preparing a cup of tea).
Sometimes, the eyes moved on to the next object in the sequence before the completion of
the preceding action. Others found that saccadic eye movements occur before hand
movements (Angel, Alston, & Garland, 1970; MĂźri, Kaluzny, Nirkko, Frosch, & Wiesendanger,
1999). This suggests that eye movements are controlled top-down during complex tasks,
which afford the parallel processing of information (e.g., the position of the cup in relation to the pot). In humans, the frontal eye field (FEF) is an essential structure for the voluntary control of eye movements during visual exploration (Müri & Nyffeler, 2008). Fortunately, oculomotor deficits, which occur after a lesion to the cortical oculomotor regions (e.g., FEF), seem to recover very rapidly in patients with brain damage (Leigh & Zee, 2015). For instance, Schiller, True, and Conway (1980) reported that only a bilateral lesion of the FEF combined with an additional lesion to the superior colliculus results in a persisting impairment of saccade parameters. Therefore, the analysis of eye movements has proven to be a valid tool
to study the processing of linguistic information not only in healthy participants (Tanenhaus,
Spivey-Knowlton, Eberhard, & Sedivy, 1995), but also in patients with brain damage (Forde,
Rusted, Mennie, Land, & Humphreys, 2010; Sheppard, Walenski, Love, & Shapiro, 2015;
Thompson & Choy, 2009). Thus, eye movement analysis is a compelling method to gain new
insights into the parallel processing of speech and co-speech gestures in aphasic patients
who are in a sub-acute to chronic state.
1.5.2 Visual exploration of co-speech gestures
Typically, studies on gesture perception included experimental conditions where a sentence was presented either with or without a representational gesture (Cohen & Otterbein, 1992; Feyereisen, 2006; Records, 1994). These studies demonstrated that sentences paired with co-speech gestures were better recalled, a phenomenon also referred to as the mnemonic effect of gestures. However, only a few studies have investigated the visual exploration of gestural movements by means of eye movement recordings.
In their pioneering work, Gullberg and Holmqvist (1999) let participants, who were wearing a head-mounted eye tracker, listen to the cartoon narrations made by another study participant. They found that the speaker's face was fixated much more often (90-95%) than the gestures (2-7%). These findings were later confirmed in a similar study by Beattie, Webster, and Ross (2010). Gullberg and Holmqvist (1999) also reported two conditions under which participants seem to be more likely to fixate the speaker's gestures: first, gestures produced in the vertical periphery were more often fixated than gestures performed centrally, and second, if the speakers fixated their own gestures, the listeners tended to follow them with their gaze. In a follow-up study, Gullberg and Holmqvist (2006) found comparable visual exploration of co-speech gestures in a face-to-face and a video condition. This finding corroborates the empirical validity of video-based eye tracking.
In a pilot study for the aphasia and gesture project, which also includes the work conducted
for the present thesis, our group was the first to study visual exploration of gestural
movements by means of eye movement recordings in stroke patients (Vanbellingen et al.,
2015). In this study, short videos of communicative and meaningless gestures were
presented without speech while eye movements were recorded in patients with left
hemispheric brain damage (LHD). Vanbellingen and colleagues (2015) could show that in
comparison to healthy controls, LHD patients fixated the face and the gesturing hand less
frequently during the visual exploration of tool-related and emblematic gestures. Moreover, they found that fixation duration on tool-related gestures was significantly correlated with the patients' imitation performance.
1.5.3 Eye movement analysis and turn-taking
In the past few years, a new experimental paradigm has been established to study the
cognitive process of turn-taking from a third-person perspective (Holler & Kendrick, 2015). This paradigm requires non-involved observers to watch video vignettes of pre-recorded dialogue while their eye movements are measured. The analysis of the precise timing of eye
movements in relation to the turn transitions between speaking actors allows some
conclusions about the processing and the underlying mechanisms of human turn-taking.
There are studies showing that children as young as 6 to 12 months shift their gaze according to the flow of the conversation between the current and the next speaker (Augusti et al., 2010; Keitel, Prinz, Friederici, von Hofsten, & Daum, 2013). Furthermore, this paradigm has been applied to assess whether non-involved observers are able to predict upcoming turn transitions during video presentation. Analogous to the minimal vocal response time introduced in section 1.3, previous research suggests that it takes at least 200 ms to plan and execute a saccadic gaze shift (Becker, 1991; Salthouse & Ellis, 1980; Westheimer, 1954). Hence, a gaze shift that occurs within the first 200 ms after the completion of a speaker's turn must have been planned prior to its end. This means that gaze shifts initiated within this time window (see Fig. 3) can be taken as indicators of the human ability to project turn transitions, as first suggested by Sacks and co-workers (1974).
Introduction
21
Figure 3. Illustration of the crucial time window for the start of projected gaze shifts in the third-person eye-tracking paradigm.
Unfortunately, most studies analysed the timing of the observer's gaze shifts in relation to the beginning of the next speaker's speech (Keitel & Daum, 2015; Keitel et al., 2013; von Hofsten, Uhlig, Adell, & Kochukhova, 2009). This means that these studies did not take into account the length of the inter-speaker gap. For instance, Keitel and colleagues presented video material including inter-speaker gaps lasting on average between 860 and 930 ms. As introduced in section 1.3, inter-speaker gaps longer than 200 ms indicate that the next speaker reacted to rather than projected the end of the current turn. Likewise, a gaze shift of a passive observer that occurs more than 200 ms after the completion of the current turn has to be considered reactive, even if it falls within the inter-speaker gap, and thus even if it occurs before the next speaker starts to speak. In the current work, we tried to address this methodological pitfall by analysing turn-end projection with respect to the end of the current turn.
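The classification logic just described can be summarised in a small sketch; the function name and the example timings are illustrative and do not reproduce the actual analysis pipeline of study 3.

```python
# Classify an observer's gaze shift relative to the end of the current turn.
# A shift starting within 200 ms of the turn's end must have been planned
# before the turn ended. All timings below are invented for illustration.

SACCADE_PLANNING_MS = 200  # minimal time to plan and execute a saccade

def classify_gaze_shift(turn_end_ms, shift_onset_ms):
    latency = shift_onset_ms - turn_end_ms
    if latency < 0:
        return "anticipatory"  # shift began while the turn was still ongoing
    if latency <= SACCADE_PLANNING_MS:
        return "projected"     # planned before the turn ended
    return "reactive"          # may have been triggered by the turn's end

for onset in (4900, 5150, 5600):
    print(onset, classify_gaze_shift(turn_end_ms=5000, shift_onset_ms=onset))
# 4900 anticipatory / 5150 projected / 5600 reactive
```

Note that the classification is anchored to the end of the current turn, not to the onset of the next speaker's speech, which is exactly the methodological point made above.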
2 Rationale and aims
The aim of the present thesis was to investigate co-speech gesture perception and the
processing of speaking turns in patients suffering from post-stroke aphasia. For this purpose, three studies were conducted. The first study focused on the visual
perception of single co-speech gestural movements, which were presented on a trial-by-trial
basis. In this study, it was investigated to what extent aphasic patients can benefit from multimodal information provided by speech and co-speech gestures. Moreover, it was of
interest how long aphasic patients would fixate the gesturing hands in comparison to
healthy control participants. In the second study, it was assessed how aphasic patients
perceive co-speech gestures that occur during dyadic conversations. Do co-speech gestures of the speaker have an impact on the observer's gaze direction (towards the speaker or the listener) and, further, on the amount of fixations on other body parts? Furthermore, we investigated
whether aphasic patients develop compensatory visual exploration strategies in order to
overcome impairments of auditory language comprehension. In the third study, we analysed
the frequency and the precise timing of turn-transition-dependent eye movements during dialogue observation. The rationale of this study was to investigate whether the gaze shift behaviour of aphasic patients reveals indications of difficulties with the prediction of upcoming turn transitions. For this purpose, it was further examined whether the lexico-syntactic complexity and the intonation during the speaker's turn would predict gaze shift probability and latency at the following turn transition.
Previous research suggests that the perception of co-speech gestures during treatment may
have positive effects on naming (Marshall et al., 2012) and lexical learning (Kroenke et al.,
2013). Moreover, it has been found that patients with impaired language comprehension
seem to rely more on the gestural information (Records, 1994). Supposing that the
perception of co-speech gestures improves language comprehension in aphasic patients, it is
relevant to study whether aphasic patients and healthy controls look at gestures differently.
The analysis of visual exploration by means of eye movement recordings provides insights
into gesture processing. However, the visual exploration of co-speech
gestures has not been studied in aphasic patients, most probably due to technical
restrictions. The analysis of fixations on moving targets (e.g., gestural movements) requires
software which allows the modelling of dynamic regions of interest.
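As an illustration of what such modelling involves, the following sketch (hypothetical data structures; the actual analysis software is not reproduced here) tests whether a visual fixation falls inside a region of interest that moves over time, by looking up the bounding box active at fixation onset:

    from bisect import bisect_right

    def fixation_hits_roi(fixation, roi_frames):
        # fixation: (time_ms, x, y); roi_frames: time-sorted list of
        # (time_ms, x_min, y_min, x_max, y_max) boxes describing the moving ROI.
        t, x, y = fixation
        onsets = [frame[0] for frame in roi_frames]
        i = bisect_right(onsets, t) - 1  # last ROI frame starting at or before t
        if i < 0:
            return False  # fixation precedes the first ROI frame
        _, x_min, y_min, x_max, y_max = roi_frames[i]
        return x_min <= x <= x_max and y_min <= y <= y_max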
Another aspect of social interaction which has only been sparsely studied is turn-taking in
aphasic patients. A series of older studies suggests that the rate of the conversational
exchange (i.e., number of turns per minute) seems to be comparable in aphasic patients and
healthy participants (Holland, 1982; Prinz, 1980; Schienberg & Holland, 1980; Ulatowska et
al., 1992). Current research indicates that healthy participants can predict the unfolding of a
speaker’s turn in order to plan their own speech act (Garrod & Pickering, 2015; Holler &
Kendrick, 2015). According to the introduced model of turn-taking (see section 1.4.4), it is
likely that language comprehension impairments in aphasic patients hinder accurate
predictions of upcoming turn transitions.
Open questions that need to be addressed:
• Is additional non-verbal information provided through co-speech gestures beneficial
for comprehension in aphasic patients? (study 1)
• Do aphasic patients perceive co-speech gestures in different ways? For
instance, do they deploy more attention to the gestural movements? (study 1 and
study 2)
• How do co-speech gestures modulate visual perception during the observation of
dialogue? Do co-speech gestures have an impact on gaze direction (towards the
speaker or the listener)? (study 2)
• Is there a difference with respect to the frequency and the timing of turn transition
related gaze shifts between aphasic patients and healthy controls? Does the
performance depend on the lexico-syntactic complexity and the intonational
information provided during video observation? (study 3)
Limitations
The studies included in the present thesis focused on the perception of co-speech gestures
(study 1 and study 2). Therefore, no conclusions can be made with regard to gesture
production abilities in aphasic patients. Furthermore, we did not distinguish between co-
speech gestures which convey communicative meaning and co-speech gestures which only
facilitate language production of the speaker (e.g., batonic gestures). Similar to the first two
studies, study 3 investigated turn-taking from a third person perspective by means of video
stimuli. Therefore, the results allow only limited conclusions about behaviour in everyday
life.
3 Empirical contribution
3.1 Synopsis of the studies
Study 1: Comprehension of co-speech gestures in aphasic patients: An eye movement
study
Study 1, published in PLoS ONE, investigated the influence of co-speech gestures on
comprehension and visual perception in aphasic patients. Twenty patients with aphasia after
left-hemispheric stroke and 30 healthy control participants watched short video clips which
depicted an actress who performed a co-speech gesture. During video presentation, the eye
movements of the participants were recorded by means of a remote eye-tracking system,
which was attached underneath the screen. In the main experiment, congruence between
speech and co-speech gesture was manipulated under three experimental conditions: 1)
congruent meaning of speech and gesture content (congruent condition); 2) incongruent
meaning of speech and gesture content (incongruent condition); and 3) speech paired with a
meaningless gesture (baseline condition). After each video clip, comprehension was assessed
by a forced-choice decision task in which participants had to indicate whether speech and
gesture matched. The results show that the meaningfulness and congruence of co-speech
gestures have a significant impact on comprehension in aphasic patients. As expected,
aphasic patients performed worse in the baseline condition, in which the gestures were
meaningless hand movements and thus carried no semantic information. The resulting
difference between aphasic patients and healthy controls can therefore be ascribed to the
speech processing deficits (e.g., in language comprehension) that are typical for aphasic patients. In
each group, the impact of co-speech gesture congruence was analysed in relation to the
decision task performance in the baseline condition. Interestingly, task performance in
aphasic patients decreased in the incongruent condition, while the congruent condition led
to a significant increase in performance. In contrast to the patient group, performance in
healthy controls was only modulated by the incongruent condition, where participants
displayed a moderate decrease in task performance. Visual exploration analysis revealed
that meaningless gestures attracted more attention than meaningful gestures, and
incongruent gestures tended to attract more attention than congruent gestures.
Furthermore, patients with aphasia, in comparison to healthy participants, fixated the face
of the actress less frequently across all experimental conditions.
Study 2: Perception of co-speech gestures in aphasic patients: A visual exploration study
during the observation of dyadic conversations
Study 2, published in Cortex, examined the influence of co-speech gestures on the visual
exploration behaviour during video observation. Sixteen patients with aphasia and 23
healthy controls watched videos of naturalistic dyadic conversations while their eye
movements were tracked by an infrared eye-tracking system. In this study, it was analysed
how the distribution of visual fixations was modulated by the factors co-speech gesture
(present and absent), gaze direction (to the speaker or to the listener), and region of interest
(ROI), including hands, face, and body. Our results show that co-speech gestures in the video
modulated gaze direction of the observer towards the speaking actor in the video. In
particular, both aphasic patients and healthy controls fixated the speaker's hands more
when he or she was gesturing, and the listener's face less. We expected that aphasic
patients would try to gain additional input from the articulatory movements of the
speaker's face or from the co-speech gestures made by the actors in the video. Contrary to
this assumption, we found that patients with aphasia fixated neither the speaker's face nor
the gesturing hands more frequently than healthy controls. Instead, our results show that,
independent of co-speech gesture presence, aphasic patients fixated the speaker's face less
frequently. This altered visual exploration strategy may be the result of a deficit in
processing audio-visual information, which may cause aphasic patients to avoid interference
between the visual and the auditory speech signal.
Study 3: Eye gaze behaviour at turn transition: How aphasic patients process speakers’
turns during video observation
Study 3, accepted for publication in the Journal of Cognitive Neuroscience, investigated the
frequency and the latency of turn transition related gaze shifts in aphasic patients. Sixteen
patients with aphasia and 23 healthy controls watched video vignettes of natural
conversations, while their eye movements were measured. Study 3 is based on the same
data sample that was documented in study 2. In study 3, data analysis focused on the
frequency and the precise timing of eye movements in relation to the turn transitions
between the speaking actors in the videos. In contrast to other studies documented in
section 1.5.3, the timing of gaze shifts was analysed with respect to the end of the current
turn, and not to the beginning of the next turn. We found that aphasic patients shifted their
gaze less frequently at turn transitions. However, patients did not show significantly
increased gaze shift latencies compared to healthy controls. The probability that a gaze shift
would occur depended on the lexico-syntactic information provided before a particular turn
transition. In healthy controls, higher lexico-syntactic complexity led to higher gaze shift
probabilities, whereas in aphasic patients higher lexico-syntactic complexity was associated
with decreasing gaze shift probability. The timing of gaze shifts depended on both the
lexico-syntactic complexity and the intonation variance. Healthy controls, but
not aphasic patients, showed shorter gaze shift latencies when both intonation variance and
lexico-syntactic complexity were increased. In addition, we found that brain lesions to the
posterior branch of the left arcuate fasciculus predicted the impact of lexico-syntactic
complexity on gaze shift latencies in aphasic patients.
3.2 Original publications
3.2.1 Study 1
Published as:
Eggenberger, N., Preisig, B. C., Hopfner, S., Vanbellingen, T., Schumacher, R., Nyffeler, T., . . .
Müri, R. M. (2016). Comprehension of co-speech gestures in aphasic patients: An eye
movement study. PLoS ONE, 11(1), e0146583. Retrieved from http://www.plosone.org
RESEARCH ARTICLE
Comprehension of Co-Speech Gestures in
Aphasic Patients: An Eye Movement Study
Noëmi Eggenberger (1), Basil C. Preisig (1), Rahel Schumacher (1,2), Simone Hopfner (1), Tim Vanbellingen (1,3), Thomas Nyffeler (1,3), Klemens Gutbrod (2), Jean-Marie Annoni (4), Stephan Bohlhalter (1,3), Dario Cazzoli (5), René M. Müri (1,2)*
1 Perception and Eye Movement Laboratory, Departments of Neurology and Clinical Research, Inselspital,
University Hospital Bern, and University of Bern, Bern, Switzerland, 2 Division of Cognitive and Restorative
Neurology, Department of Neurology, Inselspital, Bern University Hospital, and University of Bern, Bern,
Switzerland, 3 Neurology and Neurorehabilitation Center, Department of Internal Medicine, Luzerner
Kantonsspital, Luzern, Switzerland, 4 Neurology Unit, Laboratory for Cognitive and Neurological Sciences,
Department of Medicine, Faculty of Science, University of Fribourg, Fribourg, Switzerland,
5 Gerontechnology and Rehabilitation Group, University of Bern, Bern, Switzerland
* rene.mueri@insel.ch
Abstract
Background
Co-speech gestures are omnipresent and a crucial element of human interaction by facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension in terms of accuracy in a decision task.
Method
Twenty aphasic patients and 30 healthy controls watched videos in which speech was
either combined with meaningless (baseline condition), congruent, or incongruent gestures.
Comprehension was assessed with a decision task, while remote eye-tracking allowed
analysis of visual exploration.
Results
In aphasic patients, the incongruent condition resulted in a significant decrease of accuracy,
while the congruent condition led to a significant increase in accuracy compared to baseline
accuracy. In the control group, the incongruent condition resulted in a decrease in accuracy,
while the congruent condition did not significantly increase the accuracy. Visual exploration
analysis showed that patients fixated significantly less on the face and tended to fixate more
on the gesturing hands compared to controls.
Conclusion
Co-speech gestures play an important role for aphasic patients as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes.

OPEN ACCESS
Citation: Eggenberger N, Preisig BC, Schumacher R, Hopfner S, Vanbellingen T, Nyffeler T, et al. (2016) Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study. PLoS ONE 11(1): e0146583. doi:10.1371/journal.pone.0146583
Editor: Antoni Rodriguez-Fornells, University of Barcelona, SPAIN
Received: February 17, 2015
Accepted: December 18, 2015
Published: January 6, 2016
Copyright: © 2016 Eggenberger et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability Statement: All relevant data are within the paper and its Supporting Information files.
Funding: This study was entirely funded by the Swiss National Science Foundation (SNF). The grant (grant number 320030_138532/1) was received by René Müri (RM). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing Interests: The authors have declared that no competing interests exist.
Introduction
Human communication consists of both verbal (speech) and nonverbal (facial expressions, hand gestures, body posture, etc.) elements. Gesturing is a crucial part of human nonverbal communication and includes co-speech gestures—communicative movements of hands and arms that accompany concurrent speech [1–3]. After a left-hemispheric stroke, patients often develop aphasia, defined as the acquired loss or impairment of language [4]. Impairments in verbal elements of language processing in aphasia are well known and extensively studied (e.g., [4, 5]). However, less is known about potential mechanisms and impairments in non-verbal aspects and, in particular, it is uncertain to what extent gesturing influences comprehension in aphasia.
There is evidence that gesturing may be preserved in aphasic patients [6–8], either facilitating speech processing (e.g., [9, 10]) or compensating for its impairment [6, 11]. This has led to the theoretical assumption that speech and gesturing depend on two independent cortical systems [10, 12, 13]. However, other aphasic patients have considerable problems to produce or understand gestures [3, 14–17]. Further research on gesture processing in aphasia can contribute to the ongoing debate of whether gesturing and speech rely on two independent cortical systems (with the implication that gestures could substitute or facilitate impaired speech), or whether they are organized in overlapping systems of language and action (e.g., [18–20]). Studying the perception of co-speech gestures in aphasia is thus relevant for two more reasons. First, aphasia can be considered as a disorder with supra-modal aspects [4]. Thus, it seems important to gain insights into the mechanisms leading to impairment of not only verbal aspects, but also of nonverbal ones, such as gesture perception and processing. Second, understanding the role of gestures in language comprehension in aphasic patients is also of clinical relevance. Research in this field may lead to new therapeutic approaches, e.g., the development of compensatory strategies for impaired verbal communication in aphasic patients, for instance during the activities of daily living.
Only few studies (e.g., [21, 22]) examined perception of co-speech gestures in aphasic patients. Previous research has mostly concentrated on comprehension of pantomime gestures (i.e., imitation of actions by means of gestures produced in the absence of speech). To the best of our knowledge, only two studies investigated speech and gesturing integration in aphasic patients. In one of these studies, Records [23] presented information either auditorily (target word), visually (referential gesture towards target picture), or as a combination of both modalities (target word and referential gesture). Furthermore, the authors varied the level of ambiguity of the input. Aphasic patients had to indicate in a forced-choice task which picture had been described. The authors found that when auditory and visual information were ambiguous, aphasic patients relied more on the visually presented referential gesture [23]. More recently, in a single case study with a similar forced-choice paradigm, Cocks, Sautin, Kita, Morgan, and Zlotowitz [24] showed video vignettes of co-speech gestures to an aphasic patient and to a group of healthy controls. All participants were asked to select among four alternatives (including a verbal and a gestural match) the picture corresponding to the vignette they had watched. In order to solve the task, the aphasic patient relied primarily on gestural information. In contrast, healthy controls relied more on speech information [24]. The paradigm applied by Cocks and colleagues [24] allowed to assess another important aspect of co-speech gestures, namely the phenomenon of multimodal gain. This phenomenon refers to the fact that the integration of two modalities (here gesturing and speech) leads to better performance than one of the two modalities alone, as often observed in healthy participants (e.g., [25–30]; for a review see [31]). Cocks et al.'s results showed that this integration phenomenon was impaired in their aphasic patient, who showed a lower multimodal gain than healthy controls [24]. However, due to the single case nature of the study, it remains unclear whether this impairment can be generalized to all aphasic patients.
When studying speech and gesturing in aphasic patients, the frequent co-occurrence of limb apraxia (i.e., a higher cognitive impairment of motor control and conduction of movements [32, 33]) has to be taken into account. Lesions to left-hemispheric temporo-frontal areas often lead to both language impairment and apraxia (e.g., [15, 18, 34]). This co-occurrence is due to the large overlap of the cortical representation of language, limb praxis, and higher-order motor control. It is assumed [32] that apraxia influences not only gesture production, but also gesture comprehension. The influence of apraxia on gesture comprehension has been investigated by several studies (e.g., [15, 35–38]), but yielded controversial results. Halsband et al. [36] found impaired gesture imitation in apraxic patients, but no clear influence on gesture comprehension. In contrast, Pazzaglia et al. [35] reported a strong correlation between the performance in gesture imitation and gesture comprehension. The same group [38] also found gesture comprehension deficits in patients with limb apraxia. In a later study, they reported a specific deficit in gesture discrimination in a sample of patients with primary progressive aphasia [37]. Apraxia-related deficits may further complicate communicative attempts in aphasic patients [34]. In order to develop targeted speech-language therapy approaches, it may therefore be valuable to know which patients would benefit from additional, tailored gesture-based therapy.
Eye movement tracking has grown in importance in the field of cognitive neuroscience over the last few decades. Eye-tracking is a highly suitable method to measure fixation behavior, and to assess visual perception and attention to gestures (e.g., fixations on a moving / gesturing hand) or to speech (e.g., fixations on a speaker's lip movements) ([39]; for a review see also [40]). Eye-tracking techniques have been used for the study of gestures and speech-related behavior (e.g., [39, 41–43]). These investigations have shown that healthy participants spend as much as 90–95% of the fixation time on the speaker's face in live conditions, and about 88% in video conditions. Only a minority of fixations is directed towards gestures [39, 42, 43]. Several factors are supposed to influence visual exploration behavior in healthy participants, such as the gestural amplitude and gestural holds throughout the execution of the gesture, the direction of the speaker's own gaze, and differences in gestural categories [39, 42]. However, it is unclear whether aphasic patients display similar fixation patterns. To date, there do not appear to have been any studies investigating the visual exploration behavior during the observation of congruent or incongruent co-speech gestures.
The present study aimed to investigate two main research questions in a sample of aphasic patients in comparison to healthy controls. First, we aimed to assess the influence of congruence between speech and co-speech gestures on the comprehension of speech and gestures in terms of accuracy in a decision task. Second, we were interested in how the perception, i.e., the visual exploration behavior, is influenced by different levels of congruence.
To assess these questions, we created an experiment comprising short video sequences with varying levels of congruence between speech and co-speech gestures. Each video consisted of a simple spoken sentence that was accompanied by a co-speech gesture. During the presentation of the videos, infrared-based eye-tracking was used to measure visual exploration on the hands and the face of the speaker. Three conditions of varying congruence were tested: a baseline condition (i.e., speech combined with a meaningless gesture), a congruent condition (i.e., speech and gesture having the same meaning), and an incongruent condition (i.e., speech combined with a non-matching, but semantically meaningful, gesture). After the presentation of each video, the participants had to decide whether the spoken sentence was congruent with respect to the gesture (yes/no answer, forced-choice). Accuracy in the forced-choice task and visual exploration were assessed in a group of aphasic patients, and compared to those of a group of age- and gender-matched healthy participants, who underwent the same procedure.
Concerning our first aim and in accordance with previous reports (e.g., [4, 44–48]), we assume that aphasic patients generally display specific language processing (i.e., comprehension) deficits. We thus assume a priori that aphasic patients perform less accurately compared to healthy controls in the baseline condition, where meaningless gestural stimuli provide neither additional information nor semantic interference. Our first hypothesis on the influence of congruence between speech and co-speech gestures is based on previous findings showing that co-speech gestures facilitate language comprehension in healthy participants, by providing additional or even redundant semantic information (e.g., [25–29]; for a review see [31]). We thus hypothesize that congruent co-speech gestures will have a facilitating effect on comprehension, due to the presentation of additional congruent information. In contrast, incongruent gestures should result in reduced comprehension, due to the interference of the conflicting semantic contents of speech and co-speech gesture.
Furthermore, we were interested in the role of apraxia. If apraxia plays an important role in the comprehension of speech and co-speech gestures, then we expect that the comprehension in aphasic patients would not be influenced by different conditions of congruence, since the patients would have no additional gain from the co-speech gesture information. We thus hypothesize that both aphasia and apraxia severity interfere with the comprehension of speech and gesturing; however, this interference could be differentially strong depending on patients' specific impairments as well as other cognitive deficits. In an additional control experiment, we tested comprehension of isolated gestures, evaluating the possibility that comprehension of gestures per se would be impaired.
The second aim was to analyze visual exploration behavior during performance of the task and to evaluate different exploration strategies between patients and healthy controls. We assume that both healthy controls and patients would fixate the face region the most, as shown by previous reports [39, 42, 43]. Due to the design of our study, where gestures play a prominent role, we nevertheless hypothesize a larger amount of fixations on the hands than previously reported. Furthermore, we hypothesize differences in visual exploration between aphasic patients and healthy controls: due to the impaired language comprehension in aphasia, patients may not use verbal information as efficiently as healthy controls. If aphasic patients rely more on nonverbal information, such as co-speech gestures, then they should look more at the gesturing hands. This would result in increased fixation durations on the hands and decreased fixation durations on the face, compared to healthy controls. However, if apraxia has a stronger impact on visual exploration behavior than the language-related deficits (i.e., gestures become less comprehensible and less informative for aphasic patients with apraxia), then we may find decreased fixation durations on co-speech gestures and increased fixation durations on the face in comparison to healthy controls. Taken together, we hypothesized that aphasia and apraxia severity could differentially interfere with comprehension and with the influence of congruence between speech and gesturing on such comprehension.
Materials and Method
Declaration of ethical approval
All participants gave written informed consent prior to participation. Ethical approval to conduct this study was provided by the Ethical Committee of the State of Bern. The study was conducted in accordance with the principles of the latest version of the Declaration of Helsinki. The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details.
2.1 Participants
Twenty patients with aphasia after a left-hemispheric stroke in cortical-subcortical regions (13
men, age: M = 56.7, SD = 13.5) and 30 age- and gender-matched healthy controls (14 men, age:
M = 51.9, SD = 17.8) participated in the study. There was no significant difference between the
two groups with respect to age (t(48) = 1.19; p = .23) or gender ratio (χ2(1) = 1.62; p = .25). All
participants were right-handed. The native language of all participants was German. Aphasic
patients were recruited from three different neurorehabilitation clinics in the German speaking
part of Switzerland (University Hospital Bern, Kantonsspital Luzern, and Spitalzentrum Biel).
At the time of examination, aphasic patients were in a sub-acute to chronic state (i.e., 1.5 to 55
months post stroke onset, M = 14.4, SD = 16.4). Aphasia diagnosis and classification were based
on neurological examination and on standardized diagnostic language tests, administered by
experienced speech-language therapists. Diagnostic measurements were carried out within two
weeks of participation in the study. To assess aphasia severity and classify aphasia type, two
subtests of the Aachener Aphasie Test (AAT, [49]) were carried out, i.e., the Token Test and
the Written Language Test. The AAT is a standardized, well-established diagnostic aphasia test
battery for German native speakers. Willmes, Poeck, Weniger and Huber [50] showed that the
discriminative validity of the two selected subtests (i.e., Token Test and Written Language) is
as good as the discriminative validity of the full test battery. In addition, the Test of Upper Limb Apraxia (TULIA, [51]) was administered to assess limb apraxia. The TULIA is a recently developed test, which consists of 48 items divided into two subscales (imitation of the experimenter demonstrating a gesture, and pantomime upon verbal command, respectively) with 24 items each. Each subscale consists of 8 non-symbolic (meaningless), 8 intransitive (communicative), and 8 transitive (tool related) gestures. Rating is preferably performed by means of offline video analysis, on a 6-point rating scale (0–5), resulting in a score range of 0–240. Offline video-based rating yields good to excellent internal consistency, as well as test-retest reliability and construct validity [51]. Twelve out of the 20 aphasic patients were additionally diagnosed with apraxia according to the cut-off score defined by the TULIA test. Patients' demographic and clinical data are summarized in Tables 1 and 2. All participants had normal or corrected-to-normal visual acuity and hearing, and no history of psychiatric disorders. Patients with complete hemianopia involving the fovea or right-sided visual neglect were excluded from the study.
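For illustration, the reported group comparison on age can be approximated from the summary statistics given above (a sketch using scipy; small deviations from the reported t(48) = 1.19 reflect rounding of the means and standard deviations):

    from scipy.stats import ttest_ind_from_stats

    # Two-sample t-test with pooled variance, df = 20 + 30 - 2 = 48.
    t, p = ttest_ind_from_stats(mean1=56.7, std1=13.5, nobs1=20,   # patients
                                mean2=51.9, std2=17.8, nobs2=30)   # controls
    print(t, p)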
2.2 Lesion Characteristics
Lesion mapping was performed by a collaborator who was naïve with respect to the patients' test results and clinical presentation. An independent, second collaborator checked the accuracy of the mapping. Lesion mapping was performed using the MRIcron software [52]. We used the same procedure as applied by Karnath et al. [53, 54]. Diffusion-weighted scans were selected for the analysis when MRI sequences were obtained within the first 48 h post-stroke. Magnetic resonance imaging (MRI) scans were available for 13 patients, and computed tomography (CT) scans were available for the remaining seven patients. For the available MRI scans, the boundary of the lesions was delineated directly on the individual MRI images for every single transversal slice. Both the scan and the lesion shape were then mapped into approximate Talairach space using the spatial normalization algorithm provided by SPM5 (http://www.fil.ion.ucl.ac.uk/spm/). For CT scans, lesions were mapped directly on the T1-weighted MNI single subject template implemented in MRIcron [55] and visually controlled for different slice angles. The mean lesion volume was 56.7 cm³ (SEM = 13.56 cm³). Fig 1 shows the localisation and the degree of overlap of the brain lesions, transferred to the standard ch2 brain template implemented in MRIcron [55].
2.3 Stimulus Material
Three experimental conditions were implemented, each consisting of different short video sequences (Fig 2). In the first condition, the meaningless condition serving as a baseline, speech was simultaneously combined with meaningless gesturing (e.g., an actress saying "to open a bottle" and simultaneously putting her fingertips together). In the second condition, the congruent condition, sequences contained simultaneous speech and gesturing with matching content (e.g., an actress saying "to rock a baby" and simultaneously mimicking the same action, i.e., joining her hands in front of her torso, with the arms forming an oval shape, as if holding a baby, and performing an oscillating movement with her hands and arms). In the third condition, the incongruent condition, sequences contained simultaneous speech and gesturing with non-matching content (e.g., an actress saying "to brush your teeth" and simultaneously mimicking the action of dialing a number on a phone, hence creating incongruence between speech and gesturing). Most of the videos (47 out of 75) depicted actual motor actions, while the remaining 28 videos showed symbolic actions (e.g., saying "it was so delicious" while showing a thumbs-up gesture of approval). Each video sequence was followed by a forced-choice task, in which participants were prompted to decide by key press whether speech and gesturing were congruent or not. Congruent trials were correctly answered by pressing the "yes"-key, whereas both the meaningless and the incongruent trials were correctly answered by pressing the "no"-key.
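A compact way to express this response mapping (an illustrative sketch, not the experiment software) is that only congruent trials require a "yes" response:

    def is_correct(condition, response):
        # condition: "congruent", "incongruent", or "meaningless";
        # response: "yes" or "no". Congruent trials are correctly answered
        # with "yes"; incongruent and meaningless trials with "no".
        expected = "yes" if condition == "congruent" else "no"
        return response == expected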
Table 1. Overview of demographic and clinical data of aphasic patients and controls.

Measure | Statistic | Patients (n = 20) | Controls (n = 30)
Age (in years) | Mean | 56.7 | 51.9
Age (in years) | Range | 34–75 | 19–83
Gender | Male | 13 | 14
Gender | Female | 7 | 16
Months post-onset | Mean (SD) | 14.4 (16.4) | n/a
Number of errors in the Token Test (max. 50, cut-off > 7) | Mean (SD) | 18.6 (16.5) | n/a
Number of errors in the Token Test | Range | 0–50 | n/a
Number of correct items in the Written Language (max. 90, cut-off < 81) | Mean (SD) | 56.2 (28.4) | n/a
Number of correct items in the Written Language | Range | 0–86 | n/a
Number of correct items in the TULIA (max. 240, cut-off < 194) | Mean (SD) | 188.1 (21.5) | n/a
Number of correct items in the TULIA | Range | 141–221 | n/a
Number of correct items in the TULIA Imitation Subscale (max. 120, cut-off < 95) | Mean (SD) | 94.7 (11.8) | n/a
Number of correct items in the TULIA Imitation Subscale | Range | 71–110 | n/a

Notes. SD = Standard Deviation; Token Test: age-corrected error scores; Written Language: raw scores; TULIA = test of upper limb apraxia.
doi:10.1371/journal.pone.0146583.t001
Table 2. Detailed demographic and clinical data of aphasic patients.

Patient Number | Gender | Age | Years of Education | Etiology | Lesion Location | Months post-onset | Presence of Hemiparesis | Aphasic Syndrome Type | AAT Token Test Score | AAT Written Language Score | TULIA Overall Score | TULIA Imitation Subscale Score
1 | M | 61 | 14 | isch | L temp/par | 3.3 | no | amnestic | 20 | 67 | 204 | 99
2 | F | 53 | 16 | isch | L temp/par | 4.5 | no | amnestic | 0 | n/a | 221 | 106
3 | M | 74 | 15 | isch | L front/temp | 19.3 | no | Broca | 0 | 80 | 201 | 97
4 | F | 51 | 12 | isch | L front/temp | 1.7 | no | Broca | 18 | 41 | 171 | 97
5 | F | 40 | 17 | hem | L front/par | 4.0 | yes | Broca | 50 | n/a | 159 | 79
6 | F | 66 | 12 | isch | L temp | 1.6 | no | amnestic | 8 | 53 | 206 | 100
7 | F | 46 | 12 | isch | L front/temp | 41.6 | no | Broca | 7 | 60 | 212 | 105
8 | M | 71 | 12 | isch | L temp/par | 2.0 | no | Broca | 50 | n/a | 156 | 71
9 | M | 73 | 14 | isch | L temp | 1.5 | no | Wernicke | 17 | 81 | 168 | 81
10 | F | 40 | 17 | isch | L temp/par | 4.6 | no | amnestic | 19 | 80 | 207 | 110
11 | M | 69 | 12 | isch | L temp | 4.7 | no | Broca | 0 | 70 | 216 | 104
12 | F | 36 | 17 | hem | L front/temp/par | 55.0 | yes | global | 39 | 23 | 186 | 94
13 | M | 47 | 13 | isch | L front/temp/par | 36.0 | no | Broca | 11 | 75 | 192 | 97
14 | M | 34 | 11 | vasc | L front/temp/par | 13.3 | yes | global | 50 | 0 | 141 | 74
15 | M | 56 | 12 | isch | L temp/par | 37.5 | no | Broca | 11 | n/a | 189 | 94
16 | M | 67 | 13 | isch | L temp/par | 30.0 | no | Wernicke | 27 | 86 | 195 | 107
17 | M | 75 | 12 | isch | L temp | 8.7 | no | Wernicke | 0 | 60 | 168 | 80
18 | M | 62 | 14 | isch | L temp/par | 6.0 | no | Wernicke | 6 | 69 | 188 | 91
19 | M | 70 | 12 | hem | L temp/par | 10.7 | no | Wernicke | 7 | 67 | 192 | 100
20 | M | 42 | 12 | isch | bilateral | 2.0 | no | Wernicke | 13 | 79 | 189 | 108

Notes. L = left; Etiology: isch = ischaemic infarction in the territory of the medial cerebral artery, hem = hemorrhagic infarction (parenchymal hemorrhage), vasc = vasculitis; Lesion Location: front = frontal, par = parietal, temp = temporal; AAT = Aachener Aphasie Test; Token Test: age-corrected error scores; Written Language: raw scores; n/a: not applicable; TULIA = test of upper limb apraxia.
doi:10.1371/journal.pone.0146583.t002
Fig 1. Lesion maps of the 20 aphasic patients, plotted on axial slices oriented according to the radiological convention. Slices are depicted in 8 mm descending steps. The Z position of each axial slice in the Talairach stereotaxic space is presented at the bottom of the figure. The number of patients with damage involving a specific region is color-coded according to the legend.
doi:10.1371/journal.pone.0146583.g001
Fig 2. Examples of the video sequences used as stimuli, each consisting of simultaneous speech and gesturing. The sequences were either
congruent (1), incongruent (2), or speech was combined with a meaningless gesture (3).
doi:10.1371/journal.pone.0146583.g002
We therefore decided to include more trials in the congruent condition. Out of the total 75 videos, 33 were congruent, 25 were incongruent, and 17 were meaningless. A list of the content of the original stimuli in German, as well as their English translation, can be found in S1 Appendix.
2.4 Apparatus and Eye-Tracking
Eye movements were measured by means of a remote RED eye-tracking system (RED 250, SensoMotoric Instruments GmbH, Teltow, Germany), attached directly under the screen used for stimulus presentation. This infrared-based system allows the contactless measurement of the eye movements, of the number of visual fixations on specific regions of interest (ROIs), of the cumulative or mean fixation duration, and of the percentage gaze time on specific ROIs. A major advantage of the RED eye-tracking system is that fixation or stabilization of the head is not necessary, since the system is equipped with an automatic head-movement compensation mechanism (within a range of 40 x 20 cm, at approximately 70 cm viewing distance). The system was set at 60 Hz sampling rate (temporal resolution).
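For illustration, the percentage gaze time on an ROI can be derived from the 60 Hz gaze samples roughly as follows (a sketch with hypothetical data structures, not the vendor's software):

    def percent_gaze_time(gaze_samples, in_roi):
        # gaze_samples: list of (x, y) positions recorded at 60 Hz;
        # in_roi: predicate returning True if a position lies within the ROI.
        if not gaze_samples:
            return 0.0
        hits = sum(1 for x, y in gaze_samples if in_roi(x, y))
        return 100.0 * hits / len(gaze_samples)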
2.5 Procedure
Participants were seated on a chair, at a distance varying between 60 and 80 cm, facing the 22" computer screen where the videos were presented. A standard keyboard was placed in front of the participants at a comfortable distance. Participants were asked to carefully watch the video sequences and listen to the simultaneously presented speech. Moreover, they were instructed to decide, after each sequence, whether speech and gesturing had been congruent or incongruent. For this purpose, a static question slide appeared after each sequence. Participants had to enter their response by pressing one out of two keys on a standard keyboard within 6 seconds. The answer keys were color-coded, i.e., a green sticker indicating "yes" (covering the X-key of the keyboard), and a red sticker indicating "no" (covering the M-key of the keyboard). No additional verbal instruction was given. Three practice trials (one for each condition, i.e., congruent, incongruent, and baseline) were administered prior to the main experiment. During practice, feedback was given to the participants. Erroneous trials were explained and repeated to enhance task comprehension.
In the main experiment, the 75 video sequences were presented in randomized order. Four
short breaks were included in the design in order to avoid fatigue, resulting in five blocks of 15
random sequences each. Before each block, a 9-point calibration procedure was performed, in
order to ensure accurate tracking of participants’ gaze. During calibration, participants were
requested to fixate as accurately as possible 9 points, appearing sequentially and one at a time
on the screen. The quality of the calibration was assessed by the experimenter, aiming for a
gaze accuracy of 1° visual angle on the x- and y-coordinates or better. If this criterion was not
met, the calibration procedure was repeated.
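The calibration-acceptance procedure can be summarized as a simple loop (a sketch with a hypothetical helper function; the eye tracker's actual API is not shown):

    MAX_DEVIATION_DEG = 1.0  # accepted gaze accuracy per axis, in visual angle

    def calibrate_until_accurate(run_nine_point_calibration):
        # run_nine_point_calibration() is assumed to return the mean gaze
        # deviation (in degrees) on the x- and y-axes after one calibration.
        while True:
            dev_x, dev_y = run_nine_point_calibration()
            if dev_x <= MAX_DEVIATION_DEG and dev_y <= MAX_DEVIATION_DEG:
                return dev_x, dev_y  # criterion met; calibration accepted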
To assess participants' comprehension of isolated gestures, we performed an additional control experiment. The aim of this experiment was to exclude the possibility that gesture comprehension per se was impaired, which in turn might have influenced comprehension in combined conditions (i.e., speech and gesturing). In this control experiment, participants were presented with a block of 15 video sequences in randomized order. The video sequences contained gestures without any verbal utterance. Participants were asked to carefully watch the gestures. After each video sequence, they were asked to indicate the meaning of the presented gesture by means of a forced-choice task. Three possible definitions of each gesture were presented, i.e., the correct definition, a semantic distractor, and a phonological distractor.
Writing A Sociology Essay Navigating The Societal LaWriting A Sociology Essay Navigating The Societal La
Writing A Sociology Essay Navigating The Societal LaKaren Benoit
 
Citations Examples For Research Paper. Online assignment writing service.
Citations Examples For Research Paper. Online assignment writing service.Citations Examples For Research Paper. Online assignment writing service.
Citations Examples For Research Paper. Online assignment writing service.Karen Benoit
 
5Th Grade 5 Paragraph Essay Samples - Structurin
5Th Grade 5 Paragraph Essay Samples - Structurin5Th Grade 5 Paragraph Essay Samples - Structurin
5Th Grade 5 Paragraph Essay Samples - StructurinKaren Benoit
 
In An Essay Films Are Underlined - Persepolisthesis.Web.Fc2.Com
In An Essay Films Are Underlined - Persepolisthesis.Web.Fc2.ComIn An Essay Films Are Underlined - Persepolisthesis.Web.Fc2.Com
In An Essay Films Are Underlined - Persepolisthesis.Web.Fc2.ComKaren Benoit
 
Essay On Newspaper PDF Newspapers Public Opinion
Essay On Newspaper  PDF  Newspapers  Public OpinionEssay On Newspaper  PDF  Newspapers  Public Opinion
Essay On Newspaper PDF Newspapers Public OpinionKaren Benoit
 
Printable Writing Paper Ramar Pinterest Ramar
Printable Writing Paper  Ramar  Pinterest  RamarPrintable Writing Paper  Ramar  Pinterest  Ramar
Printable Writing Paper Ramar Pinterest RamarKaren Benoit
 
How To Write A Good Expository Essay -. Online assignment writing service.
How To Write A Good Expository Essay -. Online assignment writing service.How To Write A Good Expository Essay -. Online assignment writing service.
How To Write A Good Expository Essay -. Online assignment writing service.Karen Benoit
 
8 Tips That Will Make You Guru In Essay Writing - SCS
8 Tips That Will Make You Guru In Essay Writing - SCS8 Tips That Will Make You Guru In Essay Writing - SCS
8 Tips That Will Make You Guru In Essay Writing - SCSKaren Benoit
 
Benefits Of Tertiary Education. What Are The Be
Benefits Of Tertiary Education. What Are The BeBenefits Of Tertiary Education. What Are The Be
Benefits Of Tertiary Education. What Are The BeKaren Benoit
 
Essay On Money Money Essay For Students And Children In En
Essay On Money  Money Essay For Students And Children In EnEssay On Money  Money Essay For Students And Children In En
Essay On Money Money Essay For Students And Children In EnKaren Benoit
 
ALBERT CAMUS ON THE NOTION OF SUICIDE, AND THE VALUE OF.pdf
ALBERT CAMUS ON THE NOTION OF SUICIDE, AND THE VALUE OF.pdfALBERT CAMUS ON THE NOTION OF SUICIDE, AND THE VALUE OF.pdf
ALBERT CAMUS ON THE NOTION OF SUICIDE, AND THE VALUE OF.pdfKaren Benoit
 
Automation A Robotic Arm (FYP) Thesis.pdf
Automation  A Robotic Arm (FYP) Thesis.pdfAutomation  A Robotic Arm (FYP) Thesis.pdf
Automation A Robotic Arm (FYP) Thesis.pdfKaren Benoit
 
12th Report on Carcinogens.pdf
12th Report on Carcinogens.pdf12th Report on Carcinogens.pdf
12th Report on Carcinogens.pdfKaren Benoit
 
11.Bio Inspired Approach as a Problem Solving Technique.pdf
11.Bio Inspired Approach as a Problem Solving Technique.pdf11.Bio Inspired Approach as a Problem Solving Technique.pdf
11.Bio Inspired Approach as a Problem Solving Technique.pdfKaren Benoit
 
A Brief Overview Of Ethiopian Film History.pdf
A Brief Overview Of Ethiopian Film History.pdfA Brief Overview Of Ethiopian Film History.pdf
A Brief Overview Of Ethiopian Film History.pdfKaren Benoit
 
A Commentary on Education and Sustainable Development Goals.pdf
A Commentary on Education and Sustainable Development Goals.pdfA Commentary on Education and Sustainable Development Goals.pdf
A Commentary on Education and Sustainable Development Goals.pdfKaren Benoit
 
A Historical Overview of Writing and Technology.pdf
A Historical Overview of Writing and Technology.pdfA Historical Overview of Writing and Technology.pdf
A Historical Overview of Writing and Technology.pdfKaren Benoit
 
A History of Ancient Rome - Mary Beard.pdf
A History of Ancient Rome - Mary Beard.pdfA History of Ancient Rome - Mary Beard.pdf
A History of Ancient Rome - Mary Beard.pdfKaren Benoit
 
A Review of Problem Solving Capabilities in Lean Process Management.pdf
A Review of Problem Solving Capabilities in Lean Process Management.pdfA Review of Problem Solving Capabilities in Lean Process Management.pdf
A Review of Problem Solving Capabilities in Lean Process Management.pdfKaren Benoit
 
Art Archaeology the Ineligible project (2020) - extended book chapter.pdf
Art Archaeology  the Ineligible project (2020) - extended book chapter.pdfArt Archaeology  the Ineligible project (2020) - extended book chapter.pdf
Art Archaeology the Ineligible project (2020) - extended book chapter.pdfKaren Benoit
 

More from Karen Benoit (20)

Writing A Sociology Essay Navigating The Societal La
Writing A Sociology Essay Navigating The Societal LaWriting A Sociology Essay Navigating The Societal La
Writing A Sociology Essay Navigating The Societal La
 
Citations Examples For Research Paper. Online assignment writing service.
Citations Examples For Research Paper. Online assignment writing service.Citations Examples For Research Paper. Online assignment writing service.
Citations Examples For Research Paper. Online assignment writing service.
 
5Th Grade 5 Paragraph Essay Samples - Structurin
5Th Grade 5 Paragraph Essay Samples - Structurin5Th Grade 5 Paragraph Essay Samples - Structurin
5Th Grade 5 Paragraph Essay Samples - Structurin
 
In An Essay Films Are Underlined - Persepolisthesis.Web.Fc2.Com
In An Essay Films Are Underlined - Persepolisthesis.Web.Fc2.ComIn An Essay Films Are Underlined - Persepolisthesis.Web.Fc2.Com
In An Essay Films Are Underlined - Persepolisthesis.Web.Fc2.Com
 
Essay On Newspaper PDF Newspapers Public Opinion
Essay On Newspaper  PDF  Newspapers  Public OpinionEssay On Newspaper  PDF  Newspapers  Public Opinion
Essay On Newspaper PDF Newspapers Public Opinion
 
Printable Writing Paper Ramar Pinterest Ramar
Printable Writing Paper  Ramar  Pinterest  RamarPrintable Writing Paper  Ramar  Pinterest  Ramar
Printable Writing Paper Ramar Pinterest Ramar
 
How To Write A Good Expository Essay -. Online assignment writing service.
How To Write A Good Expository Essay -. Online assignment writing service.How To Write A Good Expository Essay -. Online assignment writing service.
How To Write A Good Expository Essay -. Online assignment writing service.
 
8 Tips That Will Make You Guru In Essay Writing - SCS
8 Tips That Will Make You Guru In Essay Writing - SCS8 Tips That Will Make You Guru In Essay Writing - SCS
8 Tips That Will Make You Guru In Essay Writing - SCS
 
Benefits Of Tertiary Education. What Are The Be
Benefits Of Tertiary Education. What Are The BeBenefits Of Tertiary Education. What Are The Be
Benefits Of Tertiary Education. What Are The Be
 
Essay On Money Money Essay For Students And Children In En
Essay On Money  Money Essay For Students And Children In EnEssay On Money  Money Essay For Students And Children In En
Essay On Money Money Essay For Students And Children In En
 
ALBERT CAMUS ON THE NOTION OF SUICIDE, AND THE VALUE OF.pdf
ALBERT CAMUS ON THE NOTION OF SUICIDE, AND THE VALUE OF.pdfALBERT CAMUS ON THE NOTION OF SUICIDE, AND THE VALUE OF.pdf
ALBERT CAMUS ON THE NOTION OF SUICIDE, AND THE VALUE OF.pdf
 
Automation A Robotic Arm (FYP) Thesis.pdf
Automation  A Robotic Arm (FYP) Thesis.pdfAutomation  A Robotic Arm (FYP) Thesis.pdf
Automation A Robotic Arm (FYP) Thesis.pdf
 
12th Report on Carcinogens.pdf
12th Report on Carcinogens.pdf12th Report on Carcinogens.pdf
12th Report on Carcinogens.pdf
 
11.Bio Inspired Approach as a Problem Solving Technique.pdf
11.Bio Inspired Approach as a Problem Solving Technique.pdf11.Bio Inspired Approach as a Problem Solving Technique.pdf
11.Bio Inspired Approach as a Problem Solving Technique.pdf
 
A Brief Overview Of Ethiopian Film History.pdf
A Brief Overview Of Ethiopian Film History.pdfA Brief Overview Of Ethiopian Film History.pdf
A Brief Overview Of Ethiopian Film History.pdf
 
A Commentary on Education and Sustainable Development Goals.pdf
A Commentary on Education and Sustainable Development Goals.pdfA Commentary on Education and Sustainable Development Goals.pdf
A Commentary on Education and Sustainable Development Goals.pdf
 
A Historical Overview of Writing and Technology.pdf
A Historical Overview of Writing and Technology.pdfA Historical Overview of Writing and Technology.pdf
A Historical Overview of Writing and Technology.pdf
 
A History of Ancient Rome - Mary Beard.pdf
A History of Ancient Rome - Mary Beard.pdfA History of Ancient Rome - Mary Beard.pdf
A History of Ancient Rome - Mary Beard.pdf
 
A Review of Problem Solving Capabilities in Lean Process Management.pdf
A Review of Problem Solving Capabilities in Lean Process Management.pdfA Review of Problem Solving Capabilities in Lean Process Management.pdf
A Review of Problem Solving Capabilities in Lean Process Management.pdf
 
Art Archaeology the Ineligible project (2020) - extended book chapter.pdf
Art Archaeology  the Ineligible project (2020) - extended book chapter.pdfArt Archaeology  the Ineligible project (2020) - extended book chapter.pdf
Art Archaeology the Ineligible project (2020) - extended book chapter.pdf
 

Recently uploaded

CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxGaneshChakor2
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppCeline George
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxheathfieldcps1
 
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...RKavithamani
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphThiyagu K
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfJayanti Pande
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfchloefrazer622
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxNirmalaLoungPoorunde1
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...Marc Dusseiller Dusjagr
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3JemimahLaneBuaron
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdfSoniaTolstoy
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104misteraugie
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991RKavithamani
 
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxContemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxRoyAbrique
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdfQucHHunhnh
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 

Recently uploaded (20)

CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website App
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdf
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
 
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxContemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 

Aphasia And Dialogue What Eye Movements Reveal About The Processing Of Co- Speech Gestures And The Prediction Of Turn Transitions Thesis Co-Advisor

Abstract

Two intriguing aspects of human communication are the occurrence of co-speech gestures and the alternating exchange of speech acts through turn-taking. The present thesis aimed to investigate both aspects by means of eye movement recordings in patients with post-stroke aphasia. In particular, it was assessed whether patients' linguistic deficits lead to altered visual processing of co-speech gestures and whether aphasic impairments have an impact on the capability to predict turn transitions.

The findings obtained from two studies imply that co-speech gesture processing is not affected in aphasic patients. On the contrary, we found that patients benefit from multimodal information provided through the congruent presentation of speech and co-speech gestures. However, aphasic patients focused less on the visual speech component (i.e., fewer fixations on the speaker's face). This could be an indicator of a general deficit in integrating audio-visual information, causing aphasic patients to avoid interference between the visual and the acoustic speech signal.

In a third study, we addressed the frequency and the precise timing of eye movements in relation to the turn transitions between speaking actors. Patients with aphasia shifted their gaze less frequently according to the flow of the conversation, although there was no difference with regard to the timing of their gaze shifts. In this study, we could further show that higher lexico-syntactic processing demands lead to a reduced gaze shift probability in aphasic patients. This finding might imply that patients miss more opportunities to make their own verbal contributions when talking to their family members.

Future studies should target gesture processing and turn-taking capabilities in aphasic patients during face-to-face interaction. This is important in order to verify whether the presented findings can be generalized to patients' everyday life.
Zusammenfassung

Gesture and the mutual exchange of information are central components of our everyday communication. The aim of the present dissertation was to investigate, by means of eye movement recordings, the underlying processing of gesture in patients with an acquired language disorder (aphasia). In addition, the role of aphasia in the anticipation (i.e., prediction) of turn transitions was examined.

The results of two studies indicate that the processing of co-speech gestures is not impaired in patients with aphasia. On the contrary, patients benefit from the multimodal information provided by co-speech gestures with congruent content. What did emerge, however, is that patients with aphasia focus less on the visual component of the speech signal (i.e., they look less at the speaker's face). This could point to a general deficit in the integration of audio-visual information; patients with aphasia accordingly avoid interference between the visual and the acoustic speech signal.

In a third study, the frequency and the precise timing of gaze movements as a function of turn transitions were examined. It was found that patients with aphasia followed the turn transitions in the dialogue with their gaze less frequently. Moreover, the probability that patients with aphasia would direct their gaze towards the upcoming speaker decreased with increasing complexity of the lexico-syntactic content. With regard to the timing of the gaze movements, there were no differences from healthy control participants: patients reacted as quickly as healthy individuals. The results could indicate that, in everyday life, patients often miss the right moment to contribute something to the conversation themselves.

Future studies should focus on the visual perception of gesture and on turn-taking behaviour in direct conversation with the patients. This is important in order to verify whether the presented results can also be transferred to the patients' everyday life.
Danksagung

I would like to thank everyone who supported me during my time as a doctoral student and thereby made the completion of my dissertation possible in the first place. The last three and a half years were a very fulfilling time. I miss it already.

First, I would like to thank all the participants who took part in our studies over the course of my doctorate. Without their commitment, this work would not have been possible. The good collaboration with the speech and language therapy teams was also crucial to its success. My sincere thanks to all speech and language therapists of the cognitive and restorative neurology unit of the Inselspital, the neurology and neurorehabilitation centre at the Luzerner Kantonsspital, and the Spitalzentrum Biel. Special thanks for coordinating and organizing appointments, as well as carrying out additional diagnostics, go to Sandra, Susanne, Julia, Melanie, Marianne, Corina, Carmen, Gabriela, Monica and Nicole. I would like to warmly thank Reto and Gianni for the excellent video material that we were able to record, thanks to their support, in the television studio of the Inselspital.

Dear René, thank you very much for making this doctoral thesis possible. I have learned a great deal from you over the last three and a half years. Working on the aphasia and gesture project was a fulfilling and very exciting challenge. Thank you for taking so much time for me. Dear Jean-Marie, my sincere thanks to you as well for supervising my work and for devoting so much time to my concerns.

Dear lab team (Simone, Noëmi, Rebecca, Rahel, Diego, Tobia, Tim and Dario), thank you for an unforgettable time. Dear Noëmi, dear Simone, thank you for sharing all the ups and downs of doctoral life with me. I would like to thank Noëmi for the excellent collaboration within the research project. I also warmly thank Rebecca and Noëmi for proofreading my doctoral thesis. Dear Giuseppe, thank you very much for all the support in engineering. You taught me that programming is actually a lot of fun. I would also like to thank Klemens for always letting me draw on his neuropsychological expertise. Many thanks also to the gerontechnology and rehabilitation group for the very good collaboration.

Dear Mama, dear Papa, thank you for your love and devotion, for your support during my education, for your patience, and for always awakening and fostering my interests. Dear Moritz, thank you for being such a loyal friend and brother to me. During our joint training sessions at the gym I could always free myself from work stress and recharge my batteries. Many thanks to all my colleagues, relatives and friends who accompanied me through this time. I would especially like to thank Martin and Mischa. I have been able to count on your friendship since kindergarten, and that means a great deal to me.

Dear Helene, you are the great fortune of my life. Thank you for your patience and love. You give me so much strength.
1 Introduction

Gestures and the conversational exchange of speaking turns are universal features of human communication, and they occur across all cultural and linguistic backgrounds (Kita, 2009; Levinson, 2006). In young children, gestural expression emerges before language acquisition (Bates & Dick, 2002), and it has further been shown that the use of gestures predicts later language development (Acredolo & Goodwyn, 1988; Rowe & Goldin-Meadow, 2009). Moreover, humans seem to have a predisposition for turn-taking: already at the age of 6 months, children follow the flow of a conversation with their eye gaze (Augusti, Melinder, & Gredeback, 2010). Levinson (2006) argued that human beings have an inherited social and interactive orientation. He refers to it as the “interaction engine”, which might be the source of human turn-taking. In the present thesis, I address the question of whether these aspects of human communication are affected in patients with an acquired language disorder.

The following chapter provides a broad introduction to the field of research. It starts with the interactional infrastructure, introducing dialogue, gestures, and turn-taking. Subsequently, aphasia is introduced as an acquired language disorder, with a focus on its multimodal aspects. In the final section of the chapter, the assessment of visual exploration by means of eye movement recordings is established as a valid and reliable technique to investigate co-speech gesture perception and the ongoing processing of speaking turns.

1.1 Dialogue

A dialogue can be defined as the conversational exchange between two or more interlocutors. In its basic form, the dyadic dialogue involves two interlocutors: the speaker and the addressee (i.e., the listener). Conversational exchange through dialogue is a fundamental form of language use. Typically, it is the first modality of language acquisition in children's development, and for some people, and even whole societies, it remains the only modality of language use (Clark & Wilkes-Gibbs, 1986).

A dialogue constitutes a collaborative process in which the conversation partners need to negotiate who is to talk at which time. It is believed that human beings rely on an inherent turn-taking system, which organises their opportunities to speak through social interaction (Sacks, Schegloff, & Jefferson, 1974). During the speaker's utterance, the listener signals through so-called back-channel responses (head nods, yes's, and other interjections) that he or she has understood what has been said (Duncan, 1972; Goodwin, 1981; Schegloff, 1982). Moreover, the listener integrates audio-visual information from the speech signal and, beyond that, also the paraverbal and non-verbal behaviour of the speaker. The wide range of non-verbal behaviour includes gestural movements of the hands and arms, which are probably the most studied expressive forms of non-verbal behaviour (Kendon, 1980). The following two sections of this chapter provide an introduction to the fields of gesture and turn-taking research.

1.2 Gesture

We are discussing a phenomenon that often passes without notice, though omnipresent. If you watch someone speaking, in almost any language and under nearly all circumstances, you will see what appears to be a compulsion to move the hands and arms in conjunction with the speech. (McNeill, 2000, p. 1)

1.2.1 Why do people gesture and what makes a movement a gesture?

The origin of gesturing in human interaction is still mysterious. It has been speculated whether gesturing in humans is based on social learning, i.e., whether we gesture because other people do so. Contrary to this assumption, it has been found that congenitally blind children also produce gestures (Iverson & Goldin-Meadow, 1998). An alternative interpretation is that speakers produce gestures in order to express themselves more clearly to their interaction partners. However, it has been shown that people also gesture when there is no visual contact with their interaction partner (e.g., in conversation over the phone) or when the interaction partner is blind (Iverson, Tencer, Lany, & Goldin-Meadow, 2000; McNeill, 1992). Thus, gesturing does not depend only on the visual presence of an addressee but also on the speaking process itself. Indeed, it has been shown that human gestures serve both communication (Goldin-Meadow, Alibali, & Church, 1993; McNeill, 1992) and speech production (Krahmer & Swerts, 2007; Krauss & Hadar, 1999). This means that gestures can supplement speech (e.g., a shrug to express one's uncertainty) or even substitute for it (e.g., a victory sign), but they also facilitate lexical retrieval and complement speech prosody.

Having introduced their function, one should ask under which conditions movements are perceived as gestures. Kendon (2004) suggested that gestures refer to all visible actions of body parts when they are used as an utterance. Kendon and others also stressed the communicative function of gestures (Kendon, 1994; Lausberg, 2011; McNeill, 1992; Thompson & Massaro, 1994). In a recent study, Novack, Wakefield, and Goldin-Meadow (2016) reported that hand movements are more likely to be perceived as gestures when they are accompanied by speech. This form of gestural hand movement, from here on referred to as co-speech gestures, forms one focus topic of the present thesis. Where I later refer to co-speech gestures, I mean gestures that occur concomitantly with speech, irrespective of whether they convey communicative meaning. As outlined above, gestures serve diverse functions; these include not only communicative meaning but also the facilitation of speech production.

1.2.2 The classification of co-speech gestures

The first systematic classification of spontaneous co-speech gestures goes back to Efron (1941/1972). His classification system provided the basis for later approaches to classifying different gesture types (Ekman & Friesen, 1969; Lausberg, 2011; McNeill, 1992; Sekine, Rose, Foster, Attard, & Lanyon, 2013). The aim of gesture classification is to ascribe a communicative function to a particular gestural movement. However, the classification is difficult because, in contrast to speech, gestural expressions are most often idiosyncratic. Gestures are idiosyncratic because there exists no common lexicon that would define how gestural movements need to be performed. Furthermore, co-speech gestures are hand actions that are almost never used without language. Consequently, many forms of gestural expression do not have a clear communicative function without the language context. Therefore, people cannot unambiguously recognize the communicative intention of co-speech gestures without speech (Krauss, Morrel-Samuels, & Colasante, 1991).
In the conducted studies, we did not classify the communicative function of co-speech gestures, but the reader should be briefly introduced to the most prominent categories, which found their way into different classification systems. Iconic gestures typically depict the shape of an object (iconographic gestures) or the trajectory of a movement (kinetographic gestures). This category has a close relation to the verbal utterance, and its meaning is often redundant with it. Deictic gestures are pointing gestures, most often performed with the index finger, which refer to a visible or an invisible object (e.g., an image of an abstraction). Emblems are gestures that convey culturally conventionalized and language-specific meaning (e.g., thumbs up). Finally, beats or batonic gestures represent rhythmic hand movements that go along with the pulsation of speech. The typical beat is a short and quick flip of the hand or fingers, back and forth, or up and down (McNeill, 1992).

Besides the classification of its communicative function, Kendon (1980; 2004) distinguished in his seminal work different phases within the gestural movement: (1) the preparation phase, the movement away from the resting position in preparation for the next phase; (2) the stroke phase, the main phase of a gesture unit, when the movement excursion is closest to its peak; (3) the post-stroke hold, a motionless phase which may occur before or after a stroke; and (4) the recovery or retraction phase, during which the hands return to the resting position.

It should be noted that our studies on gesture perception (study 1 and study 2) covered different phases of the gesture unit. As introduced in section 3.1, study 1 included the presentation of isolated gestural movements on a trial-by-trial basis; thus, the whole gesture unit with all its phases was considered for the analysis. This contrasts with study 2, where we examined the visual exploration of spontaneous dialogue, thereby restricting the analysis to the stroke phase of the gesture unit. The reason for this is that the stroke phase is the most obvious part of the gestural movement and can be defined with the highest reliability.
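To make this segmentation concrete, the sketch below shows one way a gesture unit could be represented for such an analysis: an annotated sequence of Kendon's phases plus a helper that extracts the stroke interval, as study 2 required. This is a minimal illustration, not the studies' actual analysis code; all class, field, and variable names are hypothetical, and timestamps are assumed to be seconds from video onset.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class GesturePhase:
    # One of Kendon's phases: "preparation", "stroke",
    # "post-stroke hold", or "retraction".
    label: str
    onset: float   # seconds from video onset (assumed convention)
    offset: float

@dataclass
class GestureUnit:
    phases: List[GesturePhase]  # phases in temporal order

    def stroke_interval(self) -> Optional[Tuple[float, float]]:
        """Return (onset, offset) of the stroke phase, e.g. to restrict a
        fixation analysis to the most reliably codable part of the gesture."""
        for phase in self.phases:
            if phase.label == "stroke":
                return (phase.onset, phase.offset)
        return None

# Example: one annotated gesture unit lasting from 2.0 s to 4.1 s.
unit = GestureUnit(phases=[
    GesturePhase("preparation", 2.0, 2.6),
    GesturePhase("stroke", 2.6, 3.2),
    GesturePhase("post-stroke hold", 3.2, 3.7),
    GesturePhase("retraction", 3.7, 4.1),
])
print(unit.stroke_interval())  # -> (2.6, 3.2)
```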
1.2.3 Neural correlates of gesturing

There has been growing interest in the neural bases of gesturing because of conflicting views regarding the relationship between speech and gestures. Whilst some researchers assume that speech and gestures rely on different communication systems, others posit that they rely on a unitary communication system. According to the first view, speech and gestures are tightly interacting, yet independent entities (Feyereisen, 1987; Hadar, Wenkert-Olenik, Krauss, & Soroker, 1998; Levelt, Richardson, & La Heij, 1985). Regarding the latter view, it has been proposed that gestures represent visible action as utterance (Kendon, 2004), or even thoughts and mental images (McNeill, 1992). Given the assumption that speech and gesture rely on the same communication system, they would also be processed by shared neural networks. Thus, gesturing and speaking would both be affected if their common neural network were damaged by a brain lesion.

Over the last ten years, neuroimaging studies have tried to identify the functional neural networks responsible for gesture perception and gesture production. Indeed, several studies have reported that brain areas associated with language functions also respond when people perceive co-speech gestures (Holle, Obleser, Rueschemeyer, & Gunter, 2010; Straube, Green, Bromberger, & Kircher, 2011; for reviews see also Andric & Small, 2012; Marstaller & Burianová, 2014). Furthermore, it has been shown that the lateral temporal cortex, an area which is part of the language system, responds more strongly if speech is accompanied by co-speech gestures (Beauchamp, Lee, Haxby, & Martin, 2002). Moreover, Özyürek, Willems, Kita, and Hagoort (2007) showed by means of electroencephalography (EEG) that both verbal and gestural mismatches during sentence processing elicit comparable event-related potentials, which suggests that the brain integrates both types of information simultaneously. In addition, co-speech gestures also activate regions outside the typical language areas, in the parietal lobe and in the premotor cortex (Dick, Goldin-Meadow, Hasson, Skipper, & Small, 2009; Green et al., 2009; Holle, Gunter, Rüschemeyer, Hennenlotter, & Iacoboni, 2008). These areas seem to be involved in the processing of hand actions and action understanding (Andric & Small, 2012).
Because of technical restrictions (i.e., in a contemporary MRI scanner participants cannot freely move their arms and hands), neuroimaging studies on gesture production are scarce. Marstaller and Burianová (2015) approached the problem by presenting nouns that referred to a tool (e.g., scissors) commonly used unimanually. The subjects were instructed to produce a corresponding action verb, an action gesture, or a combination of both. The authors reported that co-speech gesture production seems to be driven mainly by the same neural network as language production. Additional evidence is provided by a study which applied near-infrared spectroscopy: Oi, Saito, Li, and Zhao (2013) found gesture-dependent modulation of brain activity in the language network during story retelling.

Taken together, previous research indicates that the perception and the production of co-speech gestures elicit brain activity in a neural network which overlaps with the neural network for language processing. Figure 1, obtained from Dick, Mok, Beharelle, Goldin-Meadow, and Small (2014), illustrates the overlapping activation found for gesture and language perception.
Figure 1. Activation peaks in the left inferior frontal and the posterior temporal cortex obtained from studies which investigated language perception (i.e., how semantic ambiguity is resolved during language comprehension) and from studies investigating gesture perception (i.e., how gestures contribute to the resolution of semantic ambiguity). Source: (Dick et al., 2014, p. 902).

1.3 Turn-taking

Just as it is desirable to avoid bumping into people on the street, it is desirable to avoid in conversations an inordinate amount of simultaneous talking. (Duncan, 1972, p. 283)

Dialogue is characterized by the regular exchange of speaking turns between the interlocutors. Duncan (1972) suggested that there is a regular mechanism in our culture for managing the taking of speaking turns. This mechanism is described as turn-taking, and it allows the smooth and appropriate exchange of speaking turns (Goffman, 1963; Yngve, 1970). Sacks et al. (1974) reported in their seminal article the following important observations concerning turn-taking: one party talks at a time; turn order is not fixed but varies; a speaker may select the next speaker (e.g., by addressing somebody with a question); or the turn can be taken by the next speaker at the next possible completion. Probably their most important observation was that between single turns (i.e., from one turn to the next), there are no, or only slight, inter-speaker gaps or inter-speaker overlaps. However, the minimal vocal response time in human communication is around 200 ms (Fry, 1975; Izdebski & Shipp, 1978), and the production of even a simple utterance during picture naming takes 600 ms (Indefrey & Levelt, 2004). Therefore, the projection theory, which goes back to Sacks et al. (1974), assumes that the next speaker is able to predict when the current speaker will finish, based on the recognition of the linguistic units within the turn. In contrast to this view, representatives of the reaction or signal theory (Duncan, 1972; Kendon, 1967; Yngve, 1970) assume that the next speaker reacts to a turn-yielding signal provided by the current speaker near the end of the utterance. Turn-yielding signals are described as discrete behavioural cues, such as the intonation of the speaker's voice, lengthening of the final syllable, gestures, stereotyped expressions (e.g., you know…), or syntactic cues (e.g., the completion of a grammatical clause).

Heldner and Edlund (2010) conducted an extensive analysis of telephone and face-to-face interactions in 370 pairs of Dutch, Swedish, and Scottish English speakers. Their findings suggest that turn-taking is not as precise as claimed by the projection theory: overlaps occurred in 40% of all between-speaker intervals (i.e., inter-speaker gaps and inter-speaker overlaps). Moreover, the proportion of between-speaker intervals that occurred within less than 200 ms was 55% to 59%, which would speak against the reaction theory. The proportion of between-speaker intervals involving a gap or an overlap long enough to allow a reactive response was only 41% to 45%. The authors concluded that they could neither rule out projected responses nor reactive ones.
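The logic behind these proportions can be made explicit with a short sketch: given the end time of the current turn and the start time of the next, the between-speaker interval is negative for an overlap and positive for a gap, and the roughly 200 ms minimal response time separates transitions that must have been prepared in advance (projection) from those slow enough to be reactions. This is a minimal sketch under assumed conventions (turn boundaries in seconds, a fixed 200 ms threshold); it is not the analysis code of Heldner and Edlund (2010), and the function names are illustrative.

```python
# Minimal sketch of a between-speaker interval analysis, assuming turn
# boundaries in seconds; negative intervals are overlaps, positive are gaps.
MIN_RESPONSE_TIME = 0.200  # approx. minimal vocal response time (~200 ms)

def between_speaker_interval(current_turn_end: float, next_turn_start: float) -> float:
    return next_turn_start - current_turn_end

def classify(interval: float) -> str:
    if interval < 0:
        return "overlap (next turn started before the current one ended)"
    if interval < MIN_RESPONSE_TIME:
        return "gap shorter than a possible reaction (suggests projection)"
    return "gap long enough to allow a reactive response"

# Example: three hypothetical turn transitions (end of turn, start of next).
for end, start in [(5.30, 5.25), (9.80, 9.90), (14.10, 14.60)]:
    interval = between_speaker_interval(end, start)
    print(f"{interval:+.2f} s -> {classify(interval)}")
```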
  • 19. Introduction 9 1.4 Aphasia: A multimodal disorder 1.4.1 Definition, phenomenology, and neuroanatomy Aphasia is an acquired language disorder which occurs as a consequence of brain damage to the language dominant hemisphere. The incidence of aphasia in the Swiss population is 0.43 per 1000 citizens, which corresponds to 3440 new cases per year. The estimated number of patients who are living with aphasia in Switzerland is about 5000 (Koenig-Bruhin, Kolonko, At, Annoni, & Hunziker, 2013). The disorder generally affects different capabilities in all language modalities, i.e., speaking and listening (language comprehension), reading and writing. The signs and symptoms of language impairment depend on the localization of circumscribed brain lesions. Therefore, aphasia has attracted the interest of different disciplines (e.g., neurology, psychology, linguistics, computational science, or philosophy), because it provides a model to test their theories of mind and brain (Damasio, 1992). Aphasia is a central disorder of language processing and has to be distinguished from motor speech disorders (e.g., dysarthria), which are characterized by a poor articulation. In contrast, aphasia affects whole components of the language system (phonology, lexicon, syntax, and semantics). Spontaneous speech in aphasic patients is characterized by word finding difficulties, poor wording (mistakes in word choice), and morpho-syntactic mistakes in the sentence structure. Word finding deficits occur in all variants of the disorder. Patients make longer pauses which they fill with interjections (e.g., uh, er, or um), they constantly repeat utterances (i.e., perseverations), use empty phrases, or they completely abort unfinished sentences. Aphasic patients also produce unintended words that are semantically related (e.g., fridge for toaster) or unrelated to the intended word, which is also referred to as semantic paraphasia. Patients with aphasia exchange speech sounds within the word (phonological paraphasia), or they modify the word form until it becomes unrecognizable (neologisms). The morpho-syntactic structure of their sentences can be described as either agrammatical, or as paragrammatical. On the one hand, patients who show an agrammatic sentence structure omit function words and inflection forms. They have a reduced
  • 20. Introduction 10 availability of verbs, their sentences are shorter and the syntax is simplified. Moreover, they have difficulties with the word order. On the other hand, patients with paragrammatism use excessive sentence structures. Their sentences are complex and marked by erroneous doublings of parts of the sentences (Weniger, 2012). In most cases, aphasia is the consequence of brain damage to the left cerebral hemisphere. This is because language functions tend to be lateralized to the left hemisphere in most right-handed (90%) and left-handed (70%) individuals (Knecht et al., 2000). Aphasia is commonly caused by a cerebrovascular insult (80%), but it can occur as a result of virtually any other neurological incident like traumatic brain inquires, brain tumours, cerebral abscess, or progressive brain diseases (e.g., dementia) (Koenig-Bruhin et al., 2013). Two types of cerebrovascular insult can be distinguished: ischemic stroke (85%) and hemorrhagic stroke (15%) (Hickey, 2003). In ischemic stroke, the blockage of a blood supplying artery causes a shortage in oxygen in the supplied brain area which further leads to the death of the neurons in the affected brain tissue. In hemorrhagic stroke, an intracerebral bleeding causes the cell necrosis in the affected brain tissue. The first functional-anatomical models of aphasia can be traced back to the pioneer case studies conducted by Paul Broca (1861) and Carl Wernicke (1874). Broca treated a patient who could only produce the syllable tan. Based on the autopsy of this patient Broca concluded that the crucial area for speech production has to be located in the inferior frontal gyrus. Wernicke connected the evidence presented by Broca with his own observation that patients with brain lesions in the superior temporal gyrus produce fluent but meaningless speech. Wernicke suggested that word meaning and the motor plans for articulation are represented in distinct areas in the inferior frontal and the superior temporal lobe, respectively. Wernicke initially believed that the two areas are connected over association fibers fibrae propriae, which run through the insular cortex. Later, he accepted Constantin von Monakow’s finding that the arcuate fasciculus is connecting Wernicke’s with Broca’s area (Catani & Mesulam, 2008; Geschwind, 1965; Krestel, Annoni, & Jagella, 2013). Contemporary models of speech processing (Friederici, 2011; Hickok & Poeppel, 2007; Vigneau et al., 2006) assume a dual route model consisting of ventral and dorsal pathways
  • 21. Introduction 11 (see Fig. 2). The ventral pathways reach from the auditory cortex to the temporal pole and over the uncinate fasciculus and the extreme fiber capsule to the inferior frontal lobe and the frontal operculum. The ventral pathways are needed for the association of the auditory speech signal with conceptual semantic knowledge (e.g., language comprehension). The dorsal pathways reach from the temporo-parietal junction over the longitudinal fasciculus and the arcuate fasciculus to the premotor cortex and the inferior frontal gyrus. The dorsal pathways are thought to serves as an integrative network for sensory-motor processes (e.g., verbal repetition) (Friederici, 2011). Figure 2. It illustrates schematically the structural connections within the language network. The dorsal pathways (longitudinal fasciculus, arcuate fasciculus) reach from the superior temporal lobe to the premotor cortex and to Broca’s area. The ventral pathways (extreme fiber capsule, uncinate fasciculus) reach from the temporal pole to Broca’s area and to the frontal operculum (FOP), respectively. Source: (Friederici, 2011, p. 1360). 1.4.2 Aphasia diagnosis and clinical syndromes The Aachener Aphasia Test (AAT) is a widely used test for aphasia diagnosis, severity judgment, and syndrome classification in German-speaking populations (Huber, Poeck, & Willmes, 1984). This test battery is based on well-defined linguistic criteria, fulfils
  • 22. Introduction 12 psychometric requirements, and provides validated and standardized test scores for the target population. The classification of different aphasia syndromes, which is also part of the AAT, is based on characteristic symptoms in spontaneous speech. After cerebrovascular aetiology, i.e., in patients after an ischemic stroke to the middle cerebral artery, it is an old tradition in aphasiology to differentiate four standard syndromes: anomic aphasia, Broca’s aphasia, Wernicke’s aphasia, and global aphasia (Weniger, 2012). The leading symptom for anomic patients is a word finding deficit. Broca’s aphasics typically show agrammatic speech production. Wernicke’s aphasics show paragrammatic sentence structures and global aphasia is characterized by speaking automatisms. The caveat of the syndrome classification is that many studies found that the lesion location often does not correspond well with the syndrome complex (Dronkers & Larsen, 2001; Penfield & Roberts, 2014). Therefore, syndrome classification has become less important in contemporary research and in the current clinical practice. Nevertheless, the AAT is still a widely used instrument in research and in clinics for aphasia diagnosis and the assessment of aphasia severity. The AAT consists of an evaluation of spontaneous speech and of five subtests (Token Test, Naming, Comprehension, Repetition, and Written Language). For the studies included in the present thesis we decided to select two subtests of the AAT, the Token Test and the Written Language Test. Willmes, Poeck, Weniger, and Huber (1980) demonstrated that the discriminative validity of the two subtests is as good as the discriminative validity of the whole test battery. As its name implies, the Token Test consists of a selection of tokens (circles and squares) in two different sizes and five different colours. The participant is instructed to point on the token which correspond with the verbal command (e.g., show me the big red circle) given by the experimenter. The Token Test was originally introduced as a sensitive method to detect aphasic impairments of auditory language comprehension (De Renzi & Vignolo, 1962). Interestingly, later studies found that the Token Test is equally powerful in detecting patients with Broca’s aphasia, as it is in detecting patients with Wernicke’s and global aphasia (Cohen, 1976; Orgass & Poeck, 1966; Poeck, Kerschensteiner, & Hartje, 1972). The Written Language Test consists of three parts: reading of single words
and sentences; a part in which the participant has to compose words out of letters and compound words out of single words; and writing from dictation.

1.4.3 Aphasia, apraxia, and gesture

The above-mentioned findings from neuroimaging studies (see section 1.2.3) demonstrate that gesture and language processing elicit brain activity in overlapping neural networks. These findings suggest that gesture and language rely on the same, or at least on shared, neural processes. Kimura (1973) observed that right-handers, who are thought to process language mainly in the left hemisphere, predominantly use the right hand when gesturing. The author concluded that there is a common control system for free movements and speaking, which is lateralized to the left hemisphere in most people. Kimura's conceptualisation was later corroborated by two studies in aphasic patients (Cicone, Wapner, Foldi, Zurif, & Gardner, 1979; Glosser, Wiener, & Kaplan, 1986). Cicone et al. (1979) compared co-speech gestures during spontaneous communication in two patients with Broca's aphasia with those produced by two patients with Wernicke's aphasia. The authors reported that gesture production closely matched speech output. In the patients with Wernicke's aphasia, the quantity of speech and gesture production resembled that of healthy controls, but their output was unstructured and difficult to understand. Patients with Broca's aphasia showed reduced speech and gesture production, but the clarity of the produced gestures was higher; in the authors' view, it even surpassed that of healthy controls. In a later study, Glosser and colleagues (1986) found a negative correlation between language impairment and gestural complexity in aphasic patients: patients with more severe impairments produced less complex gestures. Cicone et al. (1979), Glosser et al. (1986), and later also McNeill (1992) concluded that the communicative competence to use co-speech gestures must be affected in aphasic patients, because gestures and speech rely on the same communication system. This view has been challenged by other studies which found that patients with aphasia improve their communication when they use gestures (Behrmann & Penn, 1984; Herrmann,
Reichle, Lucius-Hoene, Wallesch, & Johannsen-Horbach, 1988; Lanyon & Rose, 2009; Rousseaux, Daveluy, & Kozlowski, 2010). For example, Herrmann et al. (1988), who analysed the communication between aphasic patients and their relatives, found that the patients used more speech-replacing gestures than their interlocutors. In another study, Behrmann and Penn (1984) did not find a relationship between the severity of language impairment and gesture capabilities, as had been suggested by Glosser et al. (1986). Furthermore, Lanyon and Rose (2009) reported that even severely affected patients with almost no speech production capabilities were able to use communicative gestures.

A potential source of this inconsistency is that the above-cited studies did not assess the co-occurrence of apraxia. Apraxia is an impairment of the ability to perform skilled, purposive limb movements (Ochipa & Gonzalez Rothi, 2000). Like aphasia, apraxia is caused by left-hemispheric brain lesions (Liepmann, 1905). Previous research found that left-sided lesions can cause apraxia without aphasia, or the reverse dissociation (Kertesz, Ferro, & Shewan, 1984). However, aphasia occurs far more often without apraxia than vice versa (Goldenberg, 2008). Patients with apraxia are impaired in gesture imitation (Buxbaum, Kyle, & Menon, 2005; Goldenberg, 2008) and in performing pantomimes on verbal command (Vanbellingen et al., 2010). Borod (1989) reported that praxis skills are positively correlated with spontaneous gestural communication in patients with limb apraxia. In contrast, Feyereisen, Barter, Goossens, and Clerebaut (1988) found a negative correlation between apraxia test scores and the use of gestures in aphasic patients: patients with more severe apraxia, who also exhibited higher aphasia severity, produced more gestures. In a more recent study, Hogrefe, Ziegler, Weidinger, and Goldenberg (2012) showed that apraxia, rather than aphasia, affects the comprehensibility of gestural expression. For this purpose, videotaped cartoon narratives obtained from patients with severe aphasia and different levels of apraxia severity were presented without sound to naĂŻve raters. For each narration, the raters, who were familiar with the original cartoons, were asked to indicate which story had been told and which aspects of the story they recognized. The authors reported positive correlations of the identification rate and of the ratio of the
recognized features from the narrations with the apraxia test scores, but not with the scores from the aphasia test battery (i.e., subtests of the AAT). Taken together, previous research does not present a clear-cut relationship between language impairments in aphasic patients and their ability to produce co-speech gestures. Although there seems to be a vague connection between apraxia severity and the ability to perform communicative gestures, this relationship is probably restricted to patients with severe aphasia.

Since perception and action (e.g., co-speech gesture production) are tightly related processes (Rizzolatti, Fadiga, Gallese, & Fogassi, 1996), it is important to also illuminate gesture perception in aphasic patients. Here, the fundamental question is whether the perception of co-speech gestures improves language comprehension in aphasic patients. Records (1994) investigated the contribution of referential gestures to speech perception in aphasic patients. In her study, she presented information either auditorily (target word), visually (referential gesture towards a target picture), or as a combination of both modalities (target word and referential gesture). She found that patients with lower language comprehension abilities relied more on the gestural information. Other studies combined gesture and lexical learning therapy in mild (Kroenke, Kraft, Regenbrecht, & Obrig, 2013) or gesture and naming therapy in severe cases of aphasia (Marshall et al., 2012). In these studies, patients perceived a target word or picture together with a referential gesture. In the following parts of the experiments, they were requested to repeat the word or to name the picture while imitating the gesture. The results of these studies indicate that lexical learning and naming improve when trained together with a referential gesture. In a nutshell, there is evidence that the perception of co-speech gestures may facilitate speech perception and thus ultimately lead to deeper encoding of information in aphasic patients, as has been shown in healthy participants (Cohen & Otterbein, 1992; Feyereisen, 2006). However, in contrast to previous research on gesture production, it has not been studied whether aphasic patients and healthy controls differ in their visual perception of co-speech gestures.
1.4.4 Turn-taking in aphasia

Previous studies on the interactional structure of aphasic conversation (Holland, 1982; Prinz, 1980; Schienberg & Holland, 1980; Ulatowska, Allard, Reyes, Ford, & Chapman, 1992) showed that patients still adhere to the basic rules of turn-taking (e.g., only one speaker at a time) suggested by Sacks et al. (1974). Schienberg and Holland (1980) found that turn-taking behaviour remained intact. They analysed the conversation between two aphasic patients, showing that the patients even used repair strategies for turn-taking errors when both speakers were talking at the same time. The authors concluded that a naĂŻve observer who is not familiar with the spoken language would not even notice the patients' language production deficits. Ulatowska and colleagues (1992) went one step further by systematically comparing dyads of aphasic patients and healthy controls in a role-play set-up. The participants were engaged in a conflict between a customer and a salesperson about dissatisfaction with a product or a service. The authors analysed a total of 18 dyads: four healthy control-healthy control, eight healthy control-aphasic patient, and six aphasic patient-aphasic patient dyads. Interestingly, the authors found that aphasic patients behaved comparably to healthy controls at the discourse level in terms of turn types and the range of speech acts.

A current model of turn-taking assumes that listeners try to determine the speaker's intentions in order to predict the unfolding of the speaker's utterance (Pickering & Garrod, 2013). In other words, listeners build a representation of what they believe their partner will say in order to plan their own contribution. According to this model, listeners have to make predictions about both the content and the timing of the speaker's utterance. The authors suggested that listeners rely on language comprehension to make predictions about the content, and on the speaker's speech rate for the exact timing. Language comprehension involves the extraction of phonological, syntactic, and, most importantly, semantic information. However, the processing of semantic, syntactic, and phonological information is disturbed in aphasic patients (Butler, Ralph, & Woollams, 2014; Caplan, Waters, Dede, Michaud, & Reddy, 2007; Caramazza & Berndt, 1978; Jefferies & Ralph, 2006). One consequence of these impairments might be that patients have difficulties predicting
the unfolding of the speaker's utterance and thus have problems timing their own speech act accurately. Indeed, Schienberg and Holland (1980) reported that the only difference between dyads of healthy participants and dyads of aphasic patients was the length of the inter-speaker gaps. Hence, longer inter-speaker gaps in dyads of aphasic patients can be taken as an indication of timing difficulties. Another consequence of their linguistic impairments could be that aphasic patients miss opportunities for their own contributions (i.e., the turn relevance place) because they have difficulties recognising the speaker's intentions. Ulatowska et al. (1992) found some indication for this assumption, showing that the rate of conversational exchange (i.e., the number of turns per minute) was slightly higher for the healthy control-healthy control dyads. In sum, even if previous research provides some indications, the ability to make correct predictions about upcoming turn transitions has not been systematically studied in aphasic patients.

1.5 The assessment of visual exploration

1.5.1 The function and the recording of eye movements

Humans constantly move their eyes when capturing a visual scene because the region of acute vision covers only about two degrees of visual angle. The reason for this is the distribution of photoreceptor cells across the human retina. The retina is a light-sensitive layer of tissue which contains two types of photoreceptor cells, the rods and the cones. The rods are very light-sensitive photoreceptor cells which are important for scotopic vision (i.e., vision in darkness). The cones are responsible for colour vision and visual acuity (photopic vision). The fovea is the region of the retina that enables acute vision. This area contains only cones and barely any blood vessels, and covers approximately two degrees of visual angle. Hence, the main function of eye movements during visual exploration is the alignment of the fovea with the objects of interest. Fixations are phases of spatially relatively stable eye gaze position; this is when visual processing takes place (Ilg & Thier, 2003). Saccades are quick movements of both eyes between two fixations. The purpose of saccades is to align the fovea with a visual target. During saccades, the sensitivity of visual processing is reduced due to the phenomenon of saccadic suppression
(Matin, 1974; Zuber & Stark, 1966). In sum, visual exploration consists of a continuous alternation between fixations and saccades.

Contemporary eye-tracking systems allow a precise estimation of the eye gaze position in space and time. The eye-tracking device used for the current studies is a binocular infrared eye-tracker (RED, SensoMotoric Instruments GmbH, Teltow, Germany) with a temporal resolution of 250 Hz and a maximal spatial resolution of 0.03°. This device contains two infrared cameras which detect the pupil and the corneal reflection of the eye. Based on this information, the system estimates the exact pupil position in relation to reference points presented on the screen during the calibration procedure. A major advantage of this system is that it does not require a fixed headrest, which is very convenient for the assessment of eye movements in stroke patients.

In their seminal paper, Land and Hayhoe (2001) demonstrated that the eyes usually fixate the manipulated object during activities of daily living (e.g., preparing a cup of tea). Sometimes, the eyes moved on to the next object in the sequence before the completion of the preceding action. Others found that saccadic eye movements occur before hand movements (Angel, Alston, & Garland, 1970; MĂźri, Kaluzny, Nirkko, Frosch, & Wiesendanger, 1999). This suggests that eye movements are controlled top-down during complex tasks which afford the parallel processing of information (e.g., the position of the cup in relation to the pot). In humans, the frontal eye field (FEF) is an essential structure for the voluntary control of eye movements during visual exploration (MĂźri & Nyffeler, 2008). Fortunately, oculomotor deficits which occur after a lesion to the cortical oculomotor regions (e.g., the FEF) seem to recover very rapidly in patients with brain damage (Leigh & Zee, 2015). For instance, Schiller, True, and Conway (1980) reported that only a bilateral lesion of the FEF combined with an additional lesion of the superior colliculus results in a persisting impairment of saccade parameters. Therefore, the analysis of eye movements has proven to be a valid tool to study the processing of linguistic information not only in healthy participants (Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995), but also in patients with brain damage (Forde, Rusted, Mennie, Land, & Humphreys, 2010; Sheppard, Walenski, Love, & Shapiro, 2015; Thompson & Choy, 2009). Thus, eye movement analysis is a compelling method to gain new insights into the parallel processing of speech and co-speech gestures in aphasic patients who are in a sub-acute to chronic state.
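Because visual exploration reduces to an alternation between fixations and saccades, recorded gaze traces are typically segmented with a velocity threshold before any fixation measure can be computed. The following sketch illustrates this general idea for data sampled at 250 Hz, as with the RED system described above. It is a minimal illustration of the velocity-threshold principle, not the algorithm used in the present studies; the 30°/s threshold, the 100 ms minimum duration, and the function name are assumptions chosen for demonstration.

```python
import numpy as np

def classify_fixations(x, y, fs=250.0, vel_thresh=30.0):
    """Velocity-threshold (I-VT style) segmentation of a gaze trace.

    x, y       -- gaze coordinates in degrees of visual angle
    fs         -- sampling rate in Hz (e.g., 250 Hz for the RED system)
    vel_thresh -- samples moving faster than this (deg/s) count as saccadic
    Returns a list of (start_time, end_time) tuples, one per fixation.
    """
    dt = 1.0 / fs
    # Point-to-point angular velocity between consecutive samples.
    velocity = np.hypot(np.diff(x), np.diff(y)) / dt
    is_fixation = velocity < vel_thresh

    fixations, start = [], None
    for i, fix in enumerate(is_fixation):
        if fix and start is None:
            start = i
        elif not fix and start is not None:
            fixations.append((start * dt, i * dt))
            start = None
    if start is not None:
        fixations.append((start * dt, len(is_fixation) * dt))
    # Discard implausibly short fixation candidates (< 100 ms).
    return [(a, b) for a, b in fixations if b - a >= 0.1]
```

In practice, commercial systems apply more elaborate event detection (e.g., dispersion criteria, blink handling), but the fixation and saccade events they report rest on the same distinction between slow, stable samples and fast, transient ones.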
1.5.2 Visual exploration of co-speech gestures

Typically, studies on gesture perception included experimental conditions in which a sentence was presented either with or without a representational gesture (Cohen & Otterbein, 1992; Feyereisen, 2006; Records, 1994). These studies demonstrated that sentences paired with co-speech gestures were better recalled, a phenomenon also referred to as the mnemonic effect of gestures. However, only few studies have investigated the visual exploration of gestural movements by means of eye movement recordings. In their pioneering work, Gullberg and Holmqvist (1999) let participants, who were wearing a head-mounted eye tracker, listen to cartoon narrations produced by another study participant. They found that the speaker's face was fixated much more often (90-95% of the time) than the gestures (2-7%). These findings were later confirmed in a similar study by Beattie, Webster, and Ross (2010). Gullberg and Holmqvist (1999) also reported two conditions under which participants seem more likely to fixate the speaker's gestures: first, gestures produced in the vertical periphery were fixated more often than gestures performed centrally, and second, if speakers fixated their own gestures, listeners tended to follow them with their gaze. In a follow-up study, Gullberg and Holmqvist (2006) found comparable visual exploration of co-speech gestures in a face-to-face and a video condition. This finding corroborates the empirical validity of video-based eye tracking.

In a pilot study for the aphasia and gesture project, which also includes the work conducted for the present thesis, our group was the first to study the visual exploration of gestural movements by means of eye movement recordings in stroke patients (Vanbellingen et al., 2015). In this study, short videos of communicative and meaningless gestures were presented without speech while eye movements were recorded in patients with left-hemispheric brain damage (LHD). Vanbellingen and colleagues (2015) showed that, in comparison to healthy controls, LHD patients fixated the face and the gesturing hand less frequently during the visual exploration of tool-related and emblematic gestures. Moreover,
they found that fixation duration on tool-related gestures was significantly correlated with the patients' imitation performance.

1.5.3 Eye movement analysis and turn-taking

In the past few years, a new experimental paradigm has been established to study the cognitive process of turn-taking from a third-person perspective (Holler & Kendrick, 2015). In this paradigm, a non-involved observer watches video vignettes of pre-recorded dialogue while his or her eye movements are measured. The analysis of the precise timing of eye movements in relation to the turn transitions between the speaking actors allows conclusions about the processing and the underlying mechanisms of human turn-taking. Studies have shown that children as young as 6 to 12 months already shift their gaze between the current and the next speaker according to the flow of the conversation (Augusti et al., 2010; Keitel, Prinz, Friederici, von Hofsten, & Daum, 2013). Furthermore, this paradigm has been applied to assess whether non-involved observers are able to predict upcoming turn transitions during video presentation. Similar to the minimal vocal response time introduced in section 1.3, previous research suggests that it takes at least 200 ms to plan and execute a saccadic gaze shift (Becker, 1991; Salthouse & Ellis, 1980; Westheimer, 1954). Hence, a gaze shift that occurs within the first 200 ms after the completion of a speaker's turn must have been planned prior to its end. This means that gaze shifts initiated within this time window (see Fig. 3) can be taken as indicators of the human ability to project turn transitions, as first suggested by Sacks and co-workers (1974).
Figure 3. Illustration of the crucial time window for the start of projected gaze in the third-person eye-tracking paradigm.

Unfortunately, most studies analysed the timing of the observer's gaze shifts in relation to the beginning of the next speaker's speech (Keitel & Daum, 2015; Keitel et al., 2013; von Hofsten, Uhlig, Adell, & Kochukhova, 2009). This means that these studies did not take into account the length of the inter-speaker gap. For instance, Keitel and colleagues presented video material with inter-speaker gaps lasting on average between 860 and 930 ms. As introduced in section 1.3, inter-speaker gaps longer than 200 ms indicate that the next speaker reacted to, rather than projected, the end of the current turn. Likewise, a gaze shift by a passive observer that occurs more than 200 ms after the completion of the current turn, but still within the inter-speaker gap, has to be considered a reactive gaze shift. This even applies to gaze shifts which occur before the next speaker starts to speak. In the current work, we addressed this methodological pitfall by analysing turn end projection with respect to the end of the current turn.
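To make the timing criterion concrete, the sketch below classifies an observer's gaze shift relative to the end of the current turn rather than the start of the next one, exactly as argued above. It is a minimal illustration of this logic, not the analysis code of study 3; the 200 ms constant comes from the saccade-planning literature cited above, while the data structure and function names are assumptions.

```python
from dataclasses import dataclass

PLANNING_TIME = 0.200  # minimal time (s) to plan and execute a saccade

@dataclass
class GazeShift:
    onset: float  # time (s) at which the observer's gaze leaves the current speaker

def classify_shift(shift: GazeShift, turn_end: float, next_turn_start: float) -> str:
    """Label a gaze shift at a turn transition.

    A shift starting earlier than turn_end + 200 ms must have been planned
    while the turn was still ongoing, i.e., the transition was projected.
    Anything later is reactive, even if it precedes the next speaker's
    first word.
    """
    latency = shift.onset - turn_end
    if latency <= PLANNING_TIME:
        return "projected"
    elif shift.onset < next_turn_start:
        return "reactive (within inter-speaker gap)"
    else:
        return "reactive (after next turn onset)"

# Example: a 930 ms inter-speaker gap, gaze shift 400 ms after the turn end.
print(classify_shift(GazeShift(onset=10.4), turn_end=10.0, next_turn_start=10.93))
# -> reactive (within inter-speaker gap)
```

Note that the example shift would count as predictive under the conventional criterion (it precedes the next speaker's onset), which is precisely the pitfall the turn-end-based analysis avoids.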
2 Rationale and aims

The aim of the present thesis was to investigate co-speech gesture perception and the processing of speaking turns in patients suffering from post-stroke aphasia. For this purpose, three studies were conducted. The first study focused on the visual perception of single co-speech gestural movements, presented on a trial-by-trial basis. This study investigated to what extent aphasic patients can benefit from the multi-modal information provided by speech and co-speech gestures. Moreover, it was of interest how long aphasic patients would fixate the gesturing hands in comparison to healthy control participants. The second study assessed how aphasic patients perceive co-speech gestures that occur during dyadic conversations. Do co-speech gestures of the speaker have an impact on the observer's gaze direction (towards the speaker or the listener) and, further, on the amount of fixation of other body parts? Furthermore, we investigated whether aphasic patients develop compensatory visual exploration strategies in order to overcome impairments of auditory language comprehension. In the third study, we analysed the frequency and the precise timing of turn-transition-dependent eye movements during dialogue observation. The rationale of this study was to investigate whether the gaze shift behaviour of aphasic patients reveals indications of difficulties with the prediction of upcoming turn transitions. To this end, it was further examined whether the lexico-syntactic complexity and the intonation during the speaker's turn predict gaze shift probability and latency at the following turn transition.

Previous research suggests that the perception of co-speech gestures during treatment may have positive effects on naming (Marshall et al., 2012) and lexical learning (Kroenke et al., 2013). Moreover, it has been found that patients with impaired language comprehension seem to rely more on gestural information (Records, 1994). Supposing that the perception of co-speech gestures improves language comprehension in aphasic patients, it is relevant to study whether aphasic patients and healthy controls look at gestures differently. The analysis of visual exploration by means of eye movement recordings provides insights into gesture processing. However, the visual exploration of co-speech gestures has not been studied in aphasic patients, most probably due to technical restrictions: the analysis of fixations on moving targets (e.g., gestural movements) requires software which allows the modelling of dynamic regions of interest.
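A dynamic region of interest can be approximated by key-framing an ROI rectangle over time and interpolating between the key frames; a fixation is then attributed to the ROI whose interpolated bounds contain it at the fixation's time. The sketch below illustrates this idea under stated assumptions (linear interpolation, axis-aligned rectangles, invented helper names); it is not the software actually used for the thesis analyses.

```python
import numpy as np

def roi_at(t, keyframes):
    """Linearly interpolate an axis-aligned ROI rectangle at time t.

    keyframes -- list of (time, (x_min, y_min, x_max, y_max)) tuples,
                 sorted by time, tracing e.g. a gesturing hand.
    """
    times = np.array([k[0] for k in keyframes])
    boxes = np.array([k[1] for k in keyframes])
    return tuple(np.interp(t, times, boxes[:, i]) for i in range(4))

def fixation_in_roi(fix_t, fix_x, fix_y, keyframes):
    """True if a fixation at (fix_x, fix_y) falls inside the moving ROI."""
    x_min, y_min, x_max, y_max = roi_at(fix_t, keyframes)
    return x_min <= fix_x <= x_max and y_min <= fix_y <= y_max

# Example: a hand ROI drifting to the right over one second of video.
hand = [(0.0, (100, 300, 180, 380)), (1.0, (220, 300, 300, 380))]
print(fixation_in_roi(0.5, 200, 340, hand))  # -> True
```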
Another aspect of social interaction which has been only sparsely studied is turn-taking in aphasic patients. A series of older studies suggests that the rate of conversational exchange (i.e., the number of turns per minute) is comparable in aphasic patients and healthy participants (Holland, 1982; Prinz, 1980; Schienberg & Holland, 1980; Ulatowska et al., 1992). Current research indicates that healthy participants can predict the unfolding of a speaker's turn in order to plan their own speech act (Garrod & Pickering, 2015; Holler & Kendrick, 2015). According to the model of turn-taking introduced above (see section 1.4.4), it is likely that language comprehension impairments in aphasic patients hinder accurate predictions of upcoming turn transitions.

Open questions that need to be addressed:

• Is the additional non-verbal information provided through co-speech gestures beneficial for comprehension in aphasic patients? (study 1)
• Do aphasic patients perceive co-speech gestures differently? For instance, do they deploy more attention to the gestural movements? (study 1 and study 2)
• How do co-speech gestures modulate visual perception during the observation of dialogue? Do co-speech gestures have an impact on gaze direction (towards the speaker or the listener)? (study 2)
• Is there a difference in the frequency and the timing of turn-transition-related gaze shifts between aphasic patients and healthy controls? Does performance depend on the lexico-syntactic complexity and the intonational information provided during video observation? (study 3)
Limitations

The studies included in the present thesis focused on the perception of co-speech gestures (study 1 and study 2). Therefore, no conclusions can be drawn with regard to gesture production abilities in aphasic patients. Furthermore, we did not distinguish between co-speech gestures which convey communicative meaning and co-speech gestures which merely facilitate the speaker's language production (e.g., batonic gestures). Like the first two studies, study 3 investigated turn-taking from a third-person perspective by means of video stimuli. Therefore, the results allow only limited conclusions about behaviour in everyday life.
3 Empirical contribution

3.1 Synopsis of the studies

Study 1: Comprehension of co-speech gestures in aphasic patients: An eye movement study

Study 1, published in PLoS ONE, investigated the influence of co-speech gestures on comprehension and visual perception in aphasic patients. Twenty patients with aphasia after left-hemispheric stroke and 30 healthy control participants watched short video clips depicting an actress performing a co-speech gesture. During video presentation, the eye movements of the participants were recorded by means of a remote eye-tracking system attached underneath the screen. In the main experiment, the congruence between speech and co-speech gesture was manipulated under three experimental conditions: 1) congruent meaning of speech and gesture content (congruent condition); 2) incongruent meaning of speech and gesture content (incongruent condition); and 3) speech paired with a meaningless gesture (baseline condition). After each video clip, comprehension was assessed by a forced-choice decision task in which participants had to indicate whether speech and gesture matched.

The results show that co-speech gesture valence has a significant impact on comprehension in aphasic patients. As expected, aphasic patients had a lower performance in the baseline condition. In this condition, the co-speech gestures had no valence, because they represented meaningless hand movements. Therefore, the resulting difference between aphasic patients and healthy controls can be ascribed to speech processing deficits (e.g., in language comprehension) typical for aphasic patients. In each group, the impact of co-speech gesture congruence was analysed relative to decision task performance in the baseline condition. Interestingly, task performance in aphasic patients decreased in the incongruent condition, while the congruent condition led to a significant increase in performance. In contrast to the patient group, performance in healthy controls was only modulated by the incongruent condition, in which participants displayed a moderate decrease in task performance. Visual exploration analysis revealed
that meaningless gestures attracted more attention than meaningful gestures, and incongruent gestures tended to attract more attention than congruent gestures. Furthermore, patients with aphasia, in comparison to healthy participants, fixated the face of the actress less frequently across all experimental conditions.

Study 2: Perception of co-speech gestures in aphasic patients: A visual exploration study during the observation of dyadic conversations

Study 2, published in Cortex, examined the influence of co-speech gestures on visual exploration behaviour during video observation. Sixteen patients with aphasia and 23 healthy controls watched videos of naturalistic dyadic conversations while their eye movements were tracked by an infrared eye-tracking system. In this study, we analysed how the distribution of visual fixations was modulated by the factors co-speech gesture (present vs. absent), gaze direction (to the speaker or to the listener), and region of interest (ROI: hands, face, and body). Our results show that co-speech gestures in the video directed the observer's gaze towards the speaking actor. In particular, both aphasic patients and healthy controls fixated the speaker's hands more when he or she was gesturing, and the listener's face less. We expected that aphasic patients would try to gain additional input from the articulatory movements of the speaker's face or from the co-speech gestures made by the actors in the video. Against our assumption, we found that patients with aphasia fixated neither the speaker's face nor the gesturing hands more frequently than healthy controls. Instead, our results show that, independent of co-speech gesture presence, aphasic patients fixated the speaker's face less frequently. This altered visual exploration strategy may be the result of a deficit in processing audio-visual information, which may cause aphasic patients to avoid interference between the visual and the auditory speech signal.

Study 3: Eye gaze behaviour at turn transition: How aphasic patients process speakers' turns during video observation

Study 3, accepted for publication in the Journal of Cognitive Neuroscience, investigated the frequency and the latency of turn-transition-related gaze shifts in aphasic patients. Sixteen
patients with aphasia and 23 healthy controls watched video vignettes of natural conversations while their eye movements were measured. Study 3 is based on the same data sample documented in study 2. In study 3, data analysis focused on the frequency and the precise timing of eye movements in relation to the turn transitions between the speaking actors in the videos. In contrast to the other studies documented in section 1.5.3, the timing of gaze shifts was analysed with respect to the end of the current turn, and not to the beginning of the next turn. We found that aphasic patients shifted their gaze less frequently at turn transitions. However, patients did not show significantly increased gaze shift latencies compared to healthy controls. The probability of a gaze shift occurring depended on the lexico-syntactic information provided before a particular turn transition. In healthy controls, higher lexico-syntactic complexity led to higher gaze shift probabilities. In aphasic patients, by contrast, higher lexico-syntactic complexity was associated with decreasing gaze shift probability. The timing of gaze shifts depended on both the lexico-syntactic complexity and the intonation variance. Healthy controls, but not aphasic patients, showed shorter gaze shift latencies when both intonation variance and lexico-syntactic complexity were increased. In addition, we found that brain lesions to the posterior branch of the left arcuate fasciculus predicted the impact of lexico-syntactic complexity on gaze shift latencies in aphasic patients.
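The synopsis does not spell out the statistical model, but the described pattern, gaze shift probability varying with lexico-syntactic complexity in opposite directions across groups, can be captured by a logistic regression with a group-by-complexity interaction. The sketch below, using statsmodels on simulated toy data, is one plausible formulation rather than the analysis reported in study 3; all variable names and data are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Toy data: one row per observed turn transition.
df = pd.DataFrame({
    "group": rng.choice(["control", "aphasic"], n),
    "complexity": rng.normal(0, 1, n),  # standardized lexico-syntactic complexity
})
# Simulate opposite complexity effects in the two groups.
logit = np.where(df["group"] == "control",
                 0.5 + 0.8 * df["complexity"],
                 0.2 - 0.6 * df["complexity"])
df["shift"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# The group * complexity interaction tests whether the slopes differ.
model = smf.logit("shift ~ group * complexity", data=df).fit(disp=False)
print(model.summary())
```

A full analysis of such data would additionally account for repeated measures per participant, e.g., with a mixed-effects model.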
3.2 Original publications

3.2.1 Study 1

Published as: Eggenberger, N., Preisig, B. C., Hopfner, S., Vanbellingen, T., Schumacher, R., Nyffeler, T., . . . MĂźri, R. M. (2016). Comprehension of co-speech gestures in aphasic patients: An eye movement study. PLoS ONE, 11(1). Retrieved from http://www.plosone.org
RESEARCH ARTICLE

Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study

NoĂŤmi Eggenberger1, Basil C. Preisig1, Rahel Schumacher1,2, Simone Hopfner1, Tim Vanbellingen1,3, Thomas Nyffeler1,3, Klemens Gutbrod2, Jean-Marie Annoni4, Stephan Bohlhalter1,3, Dario Cazzoli5, RenĂŠ M. MĂźri1,2*

1 Perception and Eye Movement Laboratory, Departments of Neurology and Clinical Research, Inselspital, University Hospital Bern, and University of Bern, Bern, Switzerland; 2 Division of Cognitive and Restorative Neurology, Department of Neurology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland; 3 Neurology and Neurorehabilitation Center, Department of Internal Medicine, Luzerner Kantonsspital, Luzern, Switzerland; 4 Neurology Unit, Laboratory for Cognitive and Neurological Sciences, Department of Medicine, Faculty of Science, University of Fribourg, Fribourg, Switzerland; 5 Gerontechnology and Rehabilitation Group, University of Bern, Bern, Switzerland

* rene.mueri@insel.ch

Abstract

Background: Co-speech gestures are omnipresent and a crucial element of human interaction, facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension in terms of accuracy in a decision task.

Method: Twenty aphasic patients and 30 healthy controls watched videos in which speech was either combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration.

Results: In aphasic patients, the incongruent condition resulted in a significant decrease of accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline accuracy. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase the accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls.

Conclusion: Co-speech gestures play an important role for aphasic patients as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes.

Citation: Eggenberger N, Preisig BC, Schumacher R, Hopfner S, Vanbellingen T, Nyffeler T, et al. (2016) Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study. PLoS ONE 11(1): e0146583. doi:10.1371/journal.pone.0146583
Editor: Antoni Rodriguez-Fornells, University of Barcelona, SPAIN
Received: February 17, 2015; Accepted: December 18, 2015; Published: January 6, 2016
Copyright: Š 2016 Eggenberger et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability Statement: All relevant data are within the paper and its Supporting Information files.
Funding: This study was entirely funded by the Swiss National Science Foundation (SNF). The grant (grant number 320030_138532/1) was received by RenĂŠ MĂźri (RM). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing Interests: The authors have declared that no competing interests exist.

Introduction

Human communication consists of both verbal (speech) and nonverbal (facial expressions, hand gestures, body posture, etc.) elements. Gesturing is a crucial part of human nonverbal communication and includes co-speech gestures, i.e., communicative movements of hands and arms that accompany concurrent speech [1-3]. After a left-hemispheric stroke, patients often develop aphasia, defined as the acquired loss or impairment of language [4]. Impairments in verbal elements of language processing in aphasia are well known and extensively studied (e.g., [4, 5]). However, less is known about potential mechanisms and impairments in non-verbal aspects and, in particular, it is uncertain to what extent gesturing influences comprehension in aphasia.

There is evidence that gesturing may be preserved in aphasic patients [6-8], either facilitating speech processing (e.g., [9, 10]) or compensating for its impairment [6, 11]. This has led to the theoretical assumption that speech and gesturing depend on two independent cortical systems [10, 12, 13]. However, other aphasic patients have considerable problems producing or understanding gestures [3, 14-17]. Further research on gesture processing in aphasia can contribute to the ongoing debate of whether gesturing and speech rely on two independent cortical systems (with the implication that gestures could substitute or facilitate impaired speech), or whether they are organized in overlapping systems of language and action (e.g., [18-20]).

Studying the perception of co-speech gestures in aphasia is thus relevant for two more reasons. First, aphasia can be considered a disorder with supra-modal aspects [4]. Thus, it seems important to gain insights into the mechanisms leading to impairment of not only verbal aspects, but also of nonverbal ones, such as gesture perception and processing. Second, understanding the role of gestures in language comprehension in aphasic patients is also of clinical relevance. Research in this field may lead to new therapeutic approaches, e.g., the development of compensatory strategies for impaired verbal communication in aphasic patients, for instance during the activities of daily living.

Only few studies (e.g., [21, 22]) examined the perception of co-speech gestures in aphasic patients. Previous research has mostly concentrated on the comprehension of pantomime gestures (i.e., imitation of actions by means of gestures produced in the absence of speech). To the best of our knowledge, only two studies investigated speech and gesture integration in aphasic patients. In one of these studies, Records [23] presented information either auditorily (target word), visually (referential gesture towards a target picture), or as a combination of both modalities (target word and referential gesture). Furthermore, the author varied the level of ambiguity of the input. Aphasic patients had to indicate in a forced-choice task which picture had been described. When auditory and visual information were ambiguous, aphasic patients relied more on the visually presented referential gesture [23]. More recently, in a single case study with a similar forced-choice paradigm, Cocks, Sautin, Kita, Morgan, and Zlotowitz [24] showed video vignettes of co-speech gestures to an aphasic patient and to a group of healthy controls.
All participants were asked to select among four alternatives (including a verbal and a gestural match) the picture corresponding to the vignette they had watched. In order to solve the task, the aphasic patient relied primarily on gestural information. In contrast, healthy controls relied more on speech information [24]. The paradigm applied by Cocks and colleagues [24] also allowed the assessment of another important aspect of co-speech gestures, namely the phenomenon of
multimodal gain. This phenomenon refers to the fact that the integration of two modalities (here, gesturing and speech) leads to better performance than either modality alone, as often observed in healthy participants (e.g., [25-30]; for a review see [31]). Cocks et al.'s results showed that this integration phenomenon was impaired in their aphasic patient, who showed a lower multimodal gain than healthy controls [24]. However, due to the single-case nature of the study, it remains unclear whether this impairment can be generalized to all aphasic patients.

When studying speech and gesturing in aphasic patients, the frequent co-occurrence of limb apraxia (i.e., a higher cognitive impairment of motor control and conduction of movements [32, 33]) has to be taken into account. Lesions to left-hemispheric temporo-frontal areas often lead to both language impairment and apraxia (e.g., [15, 18, 34]). This co-occurrence is due to the large overlap of the cortical representations of language, limb praxis, and higher-order motor control. It is assumed [32] that apraxia influences not only gesture production, but also gesture comprehension. The influence of apraxia on gesture comprehension has been investigated by several studies (e.g., [15, 35-38]), but yielded controversial results. Halsband et al. [36] found impaired gesture imitation in apraxic patients, but no clear influence on gesture comprehension. In contrast, Pazzaglia et al. [35] reported a strong correlation between performance in gesture imitation and gesture comprehension. The same group [38] also found gesture comprehension deficits in patients with limb apraxia. In a later study, they reported a specific deficit in gesture discrimination in a sample of patients with primary progressive aphasia [37]. Apraxia-related deficits may further complicate communicative attempts in aphasic patients [34]. In order to develop targeted speech-language therapy approaches, it may therefore be valuable to know which patients would benefit from additional, tailored gesture-based therapy.

Eye movement tracking has grown in importance in the field of cognitive neuroscience over the last few decades. Eye-tracking is a highly suitable method to measure fixation behavior, and to assess visual perception and attention to gestures (e.g., fixations on a moving/gesturing hand) or to speech (e.g., fixations on a speaker's lip movements) ([39]; for a review see also [40]). Eye-tracking techniques have been used for the study of gestures and speech-related behavior (e.g., [39, 41-43]). These investigations have shown that healthy participants spend as much as 90-95% of the fixation time on the speaker's face in live conditions, and about 88% in video conditions. Only a minority of fixations is directed towards gestures [39, 42, 43]. Several factors are supposed to influence visual exploration behavior in healthy participants, such as the gestural amplitude and gestural holds throughout the execution of the gesture, the direction of the speaker's own gaze, and differences in gestural categories [39, 42]. However, it is unclear whether aphasic patients display similar fixation patterns. To date, there do not appear to have been any studies investigating visual exploration behavior during the observation of congruent or incongruent co-speech gestures.

The present study aimed to investigate two main research questions in a sample of aphasic patients in comparison to healthy controls.
First, we aimed to assess the influence of congruence between speech and co-speech gestures on the comprehension of speech and gestures, in terms of accuracy in a decision task. Second, we were interested in how perception, i.e., visual exploration behavior, is influenced by different levels of congruence. To address these questions, we created an experiment comprising short video sequences with varying levels of congruence between speech and co-speech gestures. Each video consisted of a simple spoken sentence that was accompanied by a co-speech gesture. During the presentation of the videos, infrared-based eye-tracking was used to measure visual exploration of the hands and the face of the speaker. Three conditions of varying congruence were tested: a baseline condition (i.e., speech combined with a meaningless gesture), a congruent condition (i.e., speech and gesture having the same meaning), and an incongruent condition (i.e., speech combined
with a non-matching, but semantically meaningful, gesture). After the presentation of each video, the participants had to decide whether the spoken sentence was congruent with the gesture (yes/no answer, forced choice). Accuracy in the forced-choice task and visual exploration were assessed in a group of aphasic patients, and compared to those of a group of age- and gender-matched healthy participants, who underwent the same procedure.

Concerning our first aim, and in accordance with previous reports (e.g., [4, 44-48]), we assume that aphasic patients generally display specific language processing (i.e., comprehension) deficits. We thus assume a priori that aphasic patients perform less accurately than healthy controls in the baseline condition, where meaningless gestural stimuli provide neither additional information nor semantic interference. Our first hypothesis on the influence of congruence between speech and co-speech gestures is based on previous findings showing that co-speech gestures facilitate language comprehension in healthy participants by providing additional or even redundant semantic information (e.g., [25-29]; for a review see [31]). We thus hypothesize that congruent co-speech gestures will have a facilitating effect on comprehension, due to the presentation of additional congruent information. In contrast, incongruent gestures should result in reduced comprehension, due to the interference of the conflicting semantic contents of speech and co-speech gesture.

Furthermore, we were interested in the role of apraxia. If apraxia plays an important role in the comprehension of speech and co-speech gestures, then we expect that comprehension in aphasic patients would not be influenced by the different conditions of congruence, since the patients would have no additional gain from the co-speech gesture information. We thus hypothesize that both aphasia and apraxia severity interfere with the comprehension of speech and gesturing; however, this interference could be differentially strong depending on patients' specific impairments as well as other cognitive deficits. In an additional control experiment, we tested the comprehension of isolated gestures, evaluating the possibility that comprehension of gestures per se would be impaired.

The second aim was to analyze visual exploration behavior during performance of the task and to evaluate different exploration strategies between patients and healthy controls. We assume that both healthy controls and patients would fixate the face region the most, as shown by previous reports [39, 42, 43]. Due to the design of our study, where gestures play a prominent role, we nevertheless hypothesize a larger amount of fixations on the hands than previously reported. Furthermore, we hypothesize differences in visual exploration between aphasic patients and healthy controls: due to impaired language comprehension in aphasia, patients may not use verbal information as efficiently as healthy controls. If aphasic patients rely more on nonverbal information, such as co-speech gestures, then they should look more at the gesturing hands. This would result in increased fixation durations on the hands and decreased fixation durations on the face, compared to healthy controls.
However, if apraxia has a stronger impact on visual exploration behavior than the language-related deficits (i.e., if gestures become less comprehensible and less informative for aphasic patients with apraxia), then we may find decreased fixation durations on co-speech gestures and increased fixation durations on the face in comparison to healthy controls. Taken together, we hypothesized that aphasia and apraxia severity could differentially interfere with comprehension and with the influence of congruence between speech and gesturing on such comprehension.

Materials and Method

Declaration of ethical approval

All participants gave written informed consent prior to participation. Ethical approval to conduct this study was provided by the Ethical Committee of the State of Bern. The study was
conducted in accordance with the principles of the latest version of the Declaration of Helsinki. The individual in this manuscript has given written informed consent (as outlined in the PLOS consent form) to publish these case details.

2.1 Participants

Twenty patients with aphasia after a left-hemispheric stroke in cortical-subcortical regions (13 men, age: M = 56.7, SD = 13.5) and 30 age- and gender-matched healthy controls (14 men, age: M = 51.9, SD = 17.8) participated in the study. There was no significant difference between the two groups with respect to age (t(48) = 1.19; p = .23) or gender ratio (χ2(1) = 1.62; p = .25). All participants were right-handed. The native language of all participants was German. Aphasic patients were recruited from three different neurorehabilitation clinics in the German-speaking part of Switzerland (University Hospital Bern, Kantonsspital Luzern, and Spitalzentrum Biel). At the time of examination, aphasic patients were in a sub-acute to chronic state (i.e., 1.5 to 55 months post stroke onset, M = 14.4, SD = 16.4). Aphasia diagnosis and classification were based on neurological examination and on standardized diagnostic language tests, administered by experienced speech-language therapists. Diagnostic measurements were carried out within two weeks of participation in the study. To assess aphasia severity and classify aphasia type, two subtests of the Aachener Aphasie Test (AAT, [49]) were carried out, i.e., the Token Test and the Written Language Test. The AAT is a standardized, well-established diagnostic aphasia test battery for German native speakers. Willmes, Poeck, Weniger and Huber [50] showed that the discriminative validity of the two selected subtests (i.e., Token Test and Written Language) is as good as the discriminative validity of the full test battery. In addition, the Test of Upper Limb Apraxia (TULIA, [51]) was administered to assess limb apraxia. The TULIA is a recently developed test which consists of 48 items divided into two subscales (imitation of the experimenter demonstrating a gesture, and pantomime upon verbal command, respectively) with 24 items each. Each subscale consists of 8 non-symbolic (meaningless), 8 intransitive (communicative), and 8 transitive (tool-related) gestures. Rating is preferably performed by means of offline video analysis, on a 6-point rating scale (0-5), resulting in a score range of 0-240. Offline video-based rating yields good to excellent internal consistency, as well as test-retest reliability and construct validity [51]. Twelve of the 20 aphasic patients were additionally diagnosed with apraxia according to the cut-off score defined by the TULIA. Patients' demographic and clinical data are summarized in Tables 1 and 2. All participants had normal or corrected-to-normal visual acuity and hearing, and no history of psychiatric disorders. Patients with complete hemianopia involving the fovea or right-sided visual neglect were excluded from the study.

2.2 Lesion Characteristics

Lesion mapping was performed by a collaborator who was naĂŻve with respect to the patients' test results and clinical presentation. An independent, second collaborator checked the accuracy of the mapping. Lesion mapping was performed using the MRIcron software [52]. We used the same procedure as applied by Karnath et al. [53, 54]. Diffusion-weighted scans were selected for the analysis when MRI sequences were obtained within the first 48 h post-stroke.
Magnetic resonance imaging (MRI) scans were available for 13 patients, and computed tomography (CT) scans were available for the remaining seven patients. For the available MRI scans, the boundary of the lesion was delineated directly on the individual MRI images for every single transversal slice. Both the scan and the lesion shape were then mapped into approximate Talairach space using the spatial normalization algorithm provided by SPM5 (http://www.fil.ion.ucl.ac.uk/spm/). For CT scans, lesions were mapped directly on the T1-weighted MNI single-subject template implemented in MRIcron [55] and visually controlled for different slice angles. The mean lesion volume was 56.7 cm3 (SEM = 13.56 cm3). Fig 1 shows the localisation and the degree of overlap of the brain lesions, transferred to the standard ch2 brain template implemented in MRIcron [55].
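The lesion overlap map shown in Fig 1 is conceptually simple: binary lesion masks, once normalized to a common template space, are summed voxel-wise, so that each voxel's value counts how many patients have damage at that location. Below is a minimal sketch of this principle with nibabel, assuming one normalized binary NIfTI mask per patient; the file names are placeholders, and this is an illustration of the general technique, not the authors' pipeline.

```python
import glob

import nibabel as nib
import numpy as np

# Placeholder paths: one binary lesion mask per patient, already
# normalized to the same template space (e.g., via SPM).
mask_files = sorted(glob.glob("lesions/patient_*.nii.gz"))

overlap = None
for path in mask_files:
    img = nib.load(path)
    data = img.get_fdata() > 0.5  # binarize the mask
    overlap = data.astype(np.int16) if overlap is None else overlap + data

# Each voxel now holds the number of patients lesioned at that location,
# which can be rendered as a color-coded overlay as in Fig 1.
nib.save(nib.Nifti1Image(overlap, img.affine), "lesion_overlap.nii.gz")
```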
2.3 Stimulus Material

Three experimental conditions were implemented, each consisting of different short video sequences (Fig 2). In the first condition, the meaningless condition serving as a baseline, speech was simultaneously combined with meaningless gesturing (e.g., an actress saying "to open a bottle" and simultaneously putting her fingertips together). In the second condition, the congruent condition, sequences contained simultaneous speech and gesturing with matching content (e.g., an actress saying "to rock a baby" and simultaneously mimicking the same action, i.e., joining her hands in front of her torso, with the arms forming an oval shape, as if holding a baby, and performing an oscillating movement with her hands and arms). In the third condition, the incongruent condition, sequences contained simultaneous speech and gesturing with non-matching content (e.g., an actress saying "to brush your teeth" and simultaneously mimicking the action of dialing a number on a phone, hence creating incongruence between speech and gesturing). Most of the videos (47 out of 75) depicted actual motor actions, while 28 videos were symbolic actions (e.g., saying "it was so delicious" while showing a thumbs-up gesture of approval). Each video sequence was followed by a forced-choice task, in which participants were prompted to decide by key press whether speech and gesturing were congruent or not. Congruent trials were correctly answered by pressing the "yes" key, whereas both the meaningless and the incongruent trials were correctly answered by pressing the "no" key.

Table 1. Overview of demographic and clinical data of aphasic patients and controls.

|                                                                          |        | Patients (n = 20) | Controls (n = 30) |
| Age (in years)                                                           | Mean   | 56.7              | 51.9              |
|                                                                          | Range  | 34-75             | 19-83             |
| Gender                                                                   | Male   | 13                | 14                |
|                                                                          | Female | 7                 | 16                |
| Months post-onset                                                        | Mean   | 14.4              |                   |
|                                                                          | SD     | 16.4              |                   |
| Number of errors in the Token Test (max. 50, cut-off > 7)                | Mean   | 18.6              |                   |
|                                                                          | SD     | 16.5              |                   |
|                                                                          | Range  | 0-50              |                   |
| Number of correct items in the Written Language (max. 90, cut-off < 81)  | Mean   | 56.2              |                   |
|                                                                          | SD     | 28.4              |                   |
|                                                                          | Range  | 0-86              |                   |
| Number of correct items in the TULIA (max. 240, cut-off < 194)           | Mean   | 188.1             |                   |
|                                                                          | SD     | 21.5              |                   |
|                                                                          | Range  | 141-221           |                   |
| Number of correct items in the TULIA Imitation Subscale (max. 120, cut-off < 95) | Mean | 94.7         |                   |
|                                                                          | SD     | 11.8              |                   |
|                                                                          | Range  | 71-110            |                   |

Notes. SD = Standard Deviation; Token Test: age-corrected error scores; Written Language: raw scores; TULIA = Test of Upper Limb Apraxia. doi:10.1371/journal.pone.0146583.t001
Table 2. Detailed demographic and clinical data of aphasic patients.

| Patient | Gender | Age | Years of Education | Etiology | Lesion Location | Months post-onset | Hemiparesis | Aphasic Syndrome Type | AAT Token Test Score | AAT Written Language Score | TULIA Overall Score | TULIA Imitation Subscale Score |
| 1  | M | 61 | 14 | isch | L temp/par       | 3.3  | no  | amnestic | 20 | 67  | 204 | 99  |
| 2  | F | 53 | 16 | isch | L temp/par       | 4.5  | no  | amnestic | 0  | n/a | 221 | 106 |
| 3  | M | 74 | 15 | isch | L front/temp     | 19.3 | no  | Broca    | 0  | 80  | 201 | 97  |
| 4  | F | 51 | 12 | isch | L front/temp     | 1.7  | no  | Broca    | 18 | 41  | 171 | 97  |
| 5  | F | 40 | 17 | hem  | L front/par      | 4.0  | yes | Broca    | 50 | n/a | 159 | 79  |
| 6  | F | 66 | 12 | isch | L temp           | 1.6  | no  | amnestic | 8  | 53  | 206 | 100 |
| 7  | F | 46 | 12 | isch | L front/temp     | 41.6 | no  | Broca    | 7  | 60  | 212 | 105 |
| 8  | M | 71 | 12 | isch | L temp/par       | 2.0  | no  | Broca    | 50 | n/a | 156 | 71  |
| 9  | M | 73 | 14 | isch | L temp           | 1.5  | no  | Wernicke | 17 | 81  | 168 | 81  |
| 10 | F | 40 | 17 | isch | L temp/par       | 4.6  | no  | amnestic | 19 | 80  | 207 | 110 |
| 11 | M | 69 | 12 | isch | L temp           | 4.7  | no  | Broca    | 0  | 70  | 216 | 104 |
| 12 | F | 36 | 17 | hem  | L front/temp/par | 55.0 | yes | global   | 39 | 23  | 186 | 94  |
| 13 | M | 47 | 13 | isch | L front/temp/par | 36.0 | no  | Broca    | 11 | 75  | 192 | 97  |
| 14 | M | 34 | 11 | vasc | L front/temp/par | 13.3 | yes | global   | 50 | 0   | 141 | 74  |
| 15 | M | 56 | 12 | isch | L temp/par       | 37.5 | no  | Broca    | 11 | n/a | 189 | 94  |
| 16 | M | 67 | 13 | isch | L temp/par       | 30.0 | no  | Wernicke | 27 | 86  | 195 | 107 |
| 17 | M | 75 | 12 | isch | L temp           | 8.7  | no  | Wernicke | 0  | 60  | 168 | 80  |
| 18 | M | 62 | 14 | isch | L temp/par       | 6.0  | no  | Wernicke | 6  | 69  | 188 | 91  |
| 19 | M | 70 | 12 | hem  | L temp/par       | 10.7 | no  | Wernicke | 7  | 67  | 192 | 100 |
| 20 | M | 42 | 12 | isch | bilateral        | 2.0  | no  | Wernicke | 13 | 79  | 189 | 108 |

Notes. L = left; Etiology: isch = ischaemic infarction in the territory of the middle cerebral artery, hem = hemorrhagic infarction (parenchymal hemorrhage), vasc = vasculitis; Lesion Location: front = frontal, par = parietal, temp = temporal; AAT = Aachener Aphasie Test; Token Test: age-corrected error scores; Written Language: raw scores; n/a = not applicable; TULIA = Test of Upper Limb Apraxia. doi:10.1371/journal.pone.0146583.t002
Fig 1. Lesion maps of the 20 aphasic patients, plotted on axial slices oriented according to the radiological convention. Slices are depicted in 8 mm descending steps. The Z position of each axial slice in Talairach stereotaxic space is given at the bottom of the figure. The number of patients with damage involving a specific region is color-coded according to the legend.
doi:10.1371/journal.pone.0146583.g001

Fig 2. Examples of the video sequences used as stimuli, each consisting of simultaneous speech and gesturing. The sequences were either congruent (1) or incongruent (2), or speech was combined with a meaningless gesture (3).
doi:10.1371/journal.pone.0146583.g002
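The overlap image in Fig 1 is, conceptually, a voxel-wise sum of binarized, spatially normalized lesion masks. The sketch below illustrates this with nibabel and NumPy; the file layout is hypothetical, and the study itself performed this step in MRIcron.

```python
# Sketch of a lesion-overlap map as shown in Fig 1: each voxel counts how
# many patients' (binarized, template-space) lesion masks cover it.
# File names are hypothetical; the study used MRIcron for this step.
import glob
import nibabel as nib
import numpy as np

overlap, affine = None, None
for path in sorted(glob.glob("normalized_masks/patient_*.nii.gz")):
    img = nib.load(path)
    mask = (img.get_fdata() > 0).astype(np.int16)  # binarize the mask
    overlap = mask if overlap is None else overlap + mask
    affine = img.affine

# 'overlap' now holds the per-voxel patient count that Fig 1 color-codes.
nib.save(nib.Nifti1Image(overlap, affine), "lesion_overlap.nii.gz")
```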
2.4 Apparatus and Eye-Tracking

Eye movements were measured by means of a remote RED eye-tracking system (RED 250, SensoMotoric Instruments GmbH, Teltow, Germany), attached directly under the screen used for stimulus presentation. This infrared-based system allows the contactless measurement of eye movements, including the number of visual fixations on specific regions of interest (ROIs), the cumulative or mean fixation duration, and the percentage gaze time on specific ROIs. A major advantage of the RED eye-tracking system is that fixation or stabilization of the head is not necessary, since the system is equipped with an automatic head-movement compensation mechanism (within a range of 40 x 20 cm, at approximately 70 cm viewing distance). The system was set at a sampling rate (temporal resolution) of 60 Hz.

2.5 Procedure

Participants were seated on a chair, at a distance varying between 60 and 80 cm, facing the 22" computer screen on which the videos were presented. A standard keyboard was placed in front of the participants at a comfortable distance. Participants were asked to watch the video sequences carefully and listen to the simultaneously presented speech. Moreover, they were instructed to decide, after each sequence, whether speech and gesturing had been congruent or incongruent. For this purpose, a static question slide appeared after each sequence, and participants had to enter their response within 6 seconds by pressing one of two keys on the keyboard. The answer keys were color-coded: a green sticker indicated “yes” (covering the X key) and a red sticker indicated “no” (covering the M key). No additional verbal instruction was given. Three practice trials (one per condition, i.e., congruent, incongruent, and baseline) were administered prior to the main experiment. During practice, feedback was given to the participants; erroneous trials were explained and repeated to enhance task comprehension.

In the main experiment, the 75 video sequences were presented in randomized order. Four short breaks were included in the design in order to avoid fatigue, resulting in five blocks of 15 randomly ordered sequences each. Before each block, a 9-point calibration procedure was performed in order to ensure accurate tracking of participants’ gaze. During calibration, participants were requested to fixate, as accurately as possible, 9 points appearing sequentially, one at a time, on the screen. The quality of the calibration was assessed by the experimenter, aiming for a gaze accuracy of 1° of visual angle or better on the x- and y-coordinates. If this criterion was not met, the calibration procedure was repeated.
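The ROI-based measures listed in section 2.4 (number of fixations, mean fixation duration, percentage gaze time) can be derived from detected fixation events as in the sketch below. The fixation records and ROI rectangles are hypothetical illustrations and do not reflect SMI's export format.

```python
# Sketch of the ROI measures named in section 2.4: fixation count, mean
# fixation duration, and percentage gaze time per region of interest.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float           # screen position in pixels
    y: float
    duration_ms: float

def roi_metrics(fixations, rois, trial_ms):
    """rois maps a name to an (x_min, y_min, x_max, y_max) rectangle."""
    out = {}
    for name, (x0, y0, x1, y1) in rois.items():
        hits = [f for f in fixations if x0 <= f.x <= x1 and y0 <= f.y <= y1]
        total = sum(f.duration_ms for f in hits)
        out[name] = {
            "n_fixations": len(hits),
            "mean_duration_ms": total / len(hits) if hits else 0.0,
            "pct_gaze_time": 100.0 * total / trial_ms,
        }
    return out

# Hypothetical data: one fixation on the face, one in the gesture space.
fixations = [Fixation(640, 200, 250.0), Fixation(650, 580, 400.0)]
rois = {"face": (500, 100, 780, 350), "gesture_space": (400, 400, 880, 760)}
print(roi_metrics(fixations, rois, trial_ms=3000.0))
```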
To assess participants’ comprehension of isolated gestures, we performed an additional control experiment. Its aim was to exclude the possibility that gesture comprehension per se was impaired, which in turn might have influenced comprehension in the combined conditions (i.e., speech and gesturing). In this control experiment, participants were presented with a block of 15 video sequences in randomized order. These sequences contained gestures without any verbal utterance, and participants were asked to watch the gestures carefully. After each video sequence, they were asked to indicate the meaning of the presented gesture by means of a forced-choice task. Three possible definitions of each gesture were presented: the correct definition, a semantic distractor, and a phonological distractor.
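Because every control trial offers one correct definition, one semantic distractor, and one phonological distractor, responses can be scored overall and errors tallied by distractor type. The sketch below assumes a simple per-trial answer log with hypothetical labels; it is an illustration, not the study's analysis code.

```python
# Sketch of scoring the gesture-only control task: three response options
# per trial, so errors can be broken down by distractor type.
from collections import Counter

answers = [  # hypothetical log, one chosen option per video sequence
    "correct", "semantic_distractor", "correct",
    "phonological_distractor", "correct",
]

counts = Counter(answers)
n = len(answers)
print(f"accuracy: {counts['correct'] / n:.2f}")                     # 0.60
print(f"semantic errors: {counts['semantic_distractor']}")          # 1
print(f"phonological errors: {counts['phonological_distractor']}")  # 1
```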