What co-speech gestures do: investigating the communicative role of visual behaviour accompanying language use during reference in interaction

Bibliographic Details
Main Author: Wilson, Jack J.
Other Authors: Davies, Catherine; Davies, Bethan
Published: University of Leeds, 2016
Subjects: 419
Online Access: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.715034
Description
Summary: Language and gesture are thought to be tightly interrelated and co-expressive behaviours (McNeill, 1992; 2005) that, when used in communication, are often referred to as composite signals/utterances (Clark, 1996; Enfield, 2009). Linguistic research has typically focussed on the structure of language, largely ignoring the effect gesture can have on the production and comprehension of utterances. In the linguistic literature, gesture is shoehorned into the communicative process rather than treated as an integral part of it (Wilson and Wharton, 2006; Wharton, 2009), which is at odds with the fact that gesture regularly plays a role directly connected to the semantic content of, in Gricean terms, “what is said” (Kendon, 2004; Grice, 1989). To explore these issues, this thesis investigates the effect of manual gestures on interaction at several different points during production and comprehension, based on the Clarkian Action Ladder (Clark, 1996). It focusses on the top two levels of the ladder: level 3, signalling and recognising, and level 4, proposing and considering. In doing so, it explores gesture’s local effect on how utterances are composed and comprehended, as well as its more global effect on interactional structure and the goals of the participants. This is achieved through two experiments: the first, the map task, is an interactive spatial description task; the second is an eye-tracked visual world task. Together, these experiments explore how gestures are composed during the map task, how gestures affect the real-time comprehension of utterances, and how gestures are embedded within the turn-by-turn nature of talk. This thesis builds a picture of the effect of gesture at each stage of the comprehension process, demonstrating that gesture needs to be incorporated fully into pragmatic models of communication.