
Visual bodily signals as core devices for coordinating minds in interaction

Speaker Judith Holler
Affiliation(s) Donders Institute for Brain, Cognition, & Behaviour (Radboud University, Nijmegen)
Date 08.12.2022
Time 14:00
Venue virtual

Abstract

Traditionally, visual bodily movements have been associated with the communication of affect and emotion. Over the past decades, however, studies have convincingly demonstrated that some of these movements carry semantic information and contribute to the communication of propositional content. In this talk, I will shed light on the pragmatic contribution that visual bodily movements make in conversation. In doing so, I will focus on fundamental processes that are key to achieving mutual understanding in talk: signalling communicative intent, producing recipient-designed messages, signalling understanding, signalling trouble in understanding and repairing problems in understanding, and performing social actions. The bodily semiotic resources that speakers use when engaging in these pragmatic processes span a wide range of bodily articulators, but in my talk I will focus on representational manual gestures and facial signals. Together, the results demonstrate that when we engage in conversational interaction, our bodies act as core coordination devices.

About Judith Holler

Judith Holler is Associate Professor and PI at the Donders Institute for Brain, Cognition, & Behaviour (Radboud University, Nijmegen) and leader of the research group Communication in Social Interaction, which is also affiliated with the Max Planck Institute for Psycholinguistics. She currently holds an ERC Consolidator Grant, which funds the project ‘Communication in Action’ (CoAct). Prior to this, she was a Marie Curie Fellow and then a senior investigator at the Max Planck Institute for Psycholinguistics. Her work focuses on the interplay of speech and visual bodily signals from the hands, head, face, and eye gaze in communicating meaning in interaction. In her scientific approach, she combines analyses of natural language corpora with experimental testing, drawing on methods from a wide range of fields, including gesture studies, linguistics, psycholinguistics, and neuroscience. In her most recent projects, she also combines these methods with cutting-edge tools and techniques, such as virtual reality, mobile eye-tracking, and dual-EEG, to further our insights into multimodal communication and coordination in social interaction.