Multimodality, Motion, Speech Production and Perception

Multimodality concerns language use and its representation in human communication, which always draws on the perception of multiple modes. It is not only the auditory channel, transmitting acoustic signals, but also the visual channel that contributes to the expression and meaning of an utterance. Speakers use hand gestures and facial expressions to emphasize words or to express their feelings and attitudes. We also communicate while on the move. Communication is therefore the output of dynamically interacting modes, incorporating acoustic, articulatory, gestural, bodily and respiratory signals. The interplay between these modes is the main focus of our research.

Within this interdisciplinary approach, we use a large repertoire of methods for data analysis, including acoustic analyses, perceptual tests, motion-capture recordings and breathing recordings. Our investigations cover a range of languages, including German, Polish, French and Turkish. Our overall goal is to better understand the relationships between the modes that form the basis of communication and the representation of language.

Research on this topic is supported by the ANR-DFG project SALAMMBO, Spoken language in motions (Fuchs); the PSIMS project within XPrag.de, The pragmatic status of iconic meaning in spoken communication: Gestures, ideophones, prosodic modulations (Ebert, Fuchs, Krifka); the DFG project Audio-visual prosody of whispered and semi-whispered speech (Żygis); the German-Polish collaboration project Grammatical tinnitus and its role in the perception of foreign language accent (Żygis); and the Marie Curie ITN project Communicative alignment at the physiological level, within the network Conversational Brains (COBRA) (Fuchs & Mooshammer).

Project publications