
GFG Projects

Encoding Given/New Information in Speech and Gesture
Alexia Galati and Susan E. Brennan
A current debate in the study of
speech-accompanying gesture concerns the extent
to which speakers take addressees’ knowledge
into account while gesturing. Some researchers
propose that gestures are produced automatically
by speakers to facilitate lexical retrieval or
alleviate cognitive load (for-the-speaker view),
while others propose that gestures are produced
with the addressees’ needs in mind
(for-the-addressee view). In this study we try
to distinguish between the two views by
examining how speakers encode given and new
information. In 20 groups of 3, one person
retold the same Road Runner cartoon story twice
to one partner (so the second retelling was to
an Old Addressee) and once to another (New
Addressee), counterbalanced for hypermnesia
(Addressee1-Addressee1-Addressee2 or
Addressee1-Addressee2-Addressee1). We compared
the gesture space, iconic precision and
distribution of gesture types for a given
speaker across all retellings. Gestures produced in stories retold to Old Addressees were smaller and less precise than those in stories retold to New Addressees, although gestures were attenuated overall over time. Converging findings come from the events realized, the words used, and the details included across retellings: speakers generally attenuated their retellings more for Old Addressees than for New Addressees. We conclude that given/new
attenuation takes place at various levels of
representation that may lend themselves
differently to adjustments intended for the
speaker or the addressee. Gesture production,
specifically, seems to be guided both by the
needs of addressees and by automatic processes
by which speakers do what is easier for
themselves. Overall, the partner-specific
adjustments we report in speech and gesture have
implications for understanding the alignment of
these modalities and extending the scope of
audience design to include nonverbal behavior.
ISGS 2007 talk

Repeated Gestures Across Speakers
Anna Kuhlen and Mandana Seyfeddinipur
Speakers in dialogue have been shown to converge
over time on the use of specific verbal
expressions. This repeated use of expressions
has been called lexical entrainment. A
comparable phenomenon can be found in gesture.
For example, interlocutors show a higher rate of
similar gestures when they can see each other (Kimbara,
2006). Also, watching mimicked speech and
gesture leads to higher production of mimicked
speech and gesture (Parrill & Kimbara, 2006). We
investigated whether gestural representations
persist over time and are passed on from speaker
to speaker.
Participants watched one of five video clips in which a speaker described a series of narrative events. Clips varied in whether the speaker used gestures and, if so, in what gesture form. Subsequently, participants related those same events to an addressee.
We analyzed whether participants produced
gestures similar to the gestures the previous
speaker had produced for narrating the same
event. The results show that participants were
more likely to produce a certain gesture form
when they had seen the event described with that
same gesture form, indicating gestural
convergence. Implications for theories of
gesture production and mechanisms of mimicry and
entrainment are discussed.
ISGS 2007 poster

Audience Design Effects in Interpretation
Julie Weisenberg
The use of oral gesture
during signing is the result of language
contact. This oral gesture, commonly referred to as mouthing, is a voiceless visual representation of words on a signer’s lips produced concurrently with manual signs. It is prevalent among English-dominant bilingual sign language interpreters who use American Sign Language (ASL) and spoken English when interpreting for deaf consumers (Davis 1989; Weisenberg 2003). These individuals have the advantage of simultaneity: the two channels of expression are distinct, one visual-gestural, the other oral-aural. Sign language interpreters are highly concerned with their deaf consumers’ level of comprehension when organizing abstract English discourse into a more concrete visual-spatial mode. They often resort to borrowing directly from the dominant language, English.
This study tested audience design effects during interpretation from spoken English to ASL. When translating, interpreters shifted their style primarily to accommodate their addressees. A style shift was measured by the rate of oral gesture. Based on an analysis of variance (ANOVA), F(1, 3) = 11.11, p < .05, the study demonstrates that the perceived cultural identity of the audience has more of an effect on oral gesture than non-audience factors such as topic.
A pattern of oral gesture
reduction was also discovered. At least two
experimental contexts contained technical
terminology that was repeated. Often there was
no manual equivalent in ASL; therefore, subjects had to translate these terms by overlapping an oral gesture with a manual sign of approximate meaning. Once the subjects had expressed the
combination a few times, the oral gesture was
reduced or removed completely.
Not only does this study confirm a commonly held notion in audience design, that people adjust their language in response to other people, but it also opens up an inquiry into the use of the interpreting context as a means of examining neologisms and language variability.

Speakers’ adjustments to a distracted audience: How speakers’ expectations and addressees’ feedback shape narrating and gesturing
Anna Kuhlen and Alexia Galati
Speakers make adjustments in response to their
addressees’ perceived needs both in their speech
and in their gestures. These adjustments are
motivated by information about addressees’ needs established either prior to or during the conversation. For instance, addressees’ feedback during the conversation can lead speakers to make adjustments in their narrations. When speakers interact with distracted addressees who give them little feedback, they narrate less vividly (Bavelas, Coates, & Johnson, 2000) and gesture less frequently (Jacobs & Garnham, 2007). Likewise, speakers’ expectations about addressees’ level of engagement, established prior to the conversation, may motivate them to adjust their utterances accordingly. In this study, we consider how speakers’ expectations, in addition to addressees’ feedback, shape speakers’ narrations and gestures.
In 39 dyads (32 men and 46 women),
speakers told addressees two jokes. Addressees
were either attentive or distracted by a second task, and speakers expected addressees to be either attentive or distracted.
In two of the four
experimental conditions, therefore, speakers
held mistaken expectations about their
addressees’ behavior. The results of this study
are currently being interpreted.

The role of speech-gesture congruency and delay in remembering action events
Alexia Galati and Arthur G. Samuel
A current debate concerns
whether people integrate information from
gestures with information from speech when
forming memory representations. Related to this
debate is whether extracting information from
gestures affects the longevity of memory
representations. This study addresses both how
well people remember events whose description is
accompanied by gesture and how well they
remember these events over time. Participants
watched videos of stories narrated by an actor
and were later prompted to reproduce target
events from each story. The stories were about
two minutes long and included three target
events, which differed in their congruency
between speech and gesture for a particular
action and were prompted after different lengths
of delay. With respect to speech-gesture
congruency, for each story, in one of the target
events speech and gesture for a particular
action were congruent with each other, in
another target event speech and gesture were
incongruent with each other, and in the
remaining target event the action was encoded in
speech but not in gesture. With respect to
delay, for each story, one target event was
prompted after a short delay (immediately after
the story), another after an intermediate delay
(after the next story), and the other after a
long delay (after four stories). We compare how
often participants realized the correct event
and mentioned the correct verb for the target
action, and how completely they reproduced the
propositional content of the event, depending on
speech-gesture congruency and delay. We also
compare how often participants produced
representational gestures for the target action
and how often these gestures were congruent with
the target verb. The data are currently being
analyzed.