Research project within the DFG Priority Programme Visual Communication (ViCom; 01.01.2026–31.12.2028, https://vicom.info/).

ViCom investigates the special features and linguistic significance of visual communication. This comprises sign languages, fully developed natural languages that rely exclusively on the visual channel, as well as visual means that enhance spoken language, such as gestures. The programme aims to disclose the specific characteristics of the visual modality as a communication channel and its interaction with other channels (especially the auditory channel), in order to develop a comprehensive theoretical linguistic model of human communication and its cognitive foundations.
About CoSGrIn-VR
The CoSGrIn-VR project addresses the computational semantic mechanisms underlying speech-gesture integration, a research gap in formal semantics and multimodal communication studies. Little is known in computational semantics about how gestures acquire meaning and how gesture meaning combines with speech meaning. The project challenges holistic and gestalt-based models by developing a computational semantics of speech-gesture integration that is (1) formally modelled, (2) computationally implementable, (3) experimentally testable in virtual reality (VR), and (4) cognitively interpretable.
CoSGrIn-VR focuses on two key research areas: (i) developing computational semantic models for recognizing acting gestures and (ii) extending the GeMDiS-Model from the first phase to handle graded exemplification, addressing cases where gestures only partially or indirectly affiliate with speech. The first research area examines how perceptual action classifiers can account for acting gestures, which simulate actions. The second research area investigates the informational uncertainty in gesture-speech integration by studying quantified noun phrases and references to atypical objects.
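The idea of graded exemplification can be illustrated with a toy sketch (all feature names, prototypes, and the similarity measure below are hypothetical illustrations, not the project's actual model): a gesture is represented by a vector of form features, and a perceptual classifier returns a graded degree to which the gesture exemplifies a lexical predicate, rather than a binary match.

```python
# Illustrative sketch only: a toy "perceptual classifier" that returns a
# graded degree of fit between a gesture's form features and a predicate's
# prototype, instead of a yes/no classification. All feature names,
# prototypes, and the similarity measure are hypothetical.
import math

# Hypothetical prototype feature vectors for two predicates.
PROTOTYPES = {
    "round": {"curvature": 1.0, "symmetry": 0.9, "closure": 1.0},
    "flat":  {"curvature": 0.0, "symmetry": 0.8, "closure": 0.2},
}

def graded_fit(gesture: dict, predicate: str) -> float:
    """Return a value in [0, 1]: how well the gesture exemplifies the predicate."""
    proto = PROTOTYPES[predicate]
    # Gaussian similarity over shared features (a common soft-matching choice).
    sq_dist = sum((gesture.get(f, 0.0) - v) ** 2 for f, v in proto.items())
    return math.exp(-sq_dist)

# A circular drawing gesture fits "round" far better than "flat",
# but neither fit is categorical.
circle_gesture = {"curvature": 0.95, "symmetry": 0.85, "closure": 0.9}
assert graded_fit(circle_gesture, "round") > graded_fit(circle_gesture, "flat")
```

Replacing the binary question "does this gesture depict the predicate?" with a graded score is one way to model the informational uncertainty of partially or indirectly affiliated gestures.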
Methodologically, the project employs Interaction Verification Games within VR environments to test its models. These immersive experiments capture multimodal data, including body movements, gestures, spatial behavior, and gaze. Additionally, CoSGrIn-VR aims to develop an AI-based VR lab to enhance experimental control using avatars and automate the annotation of multimodal communication. A key component of this approach is the Multi-perspective Annotation Model (MAM), which enables the systematic and largely automatic annotation of multimodal experimental data.
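How multi-perspective annotations might be organised can be sketched as follows (a purely illustrative data structure; the tier names, fields, and query below are assumptions, not the actual MAM schema): each tier records one perspective, such as speech or gesture, as time-stamped intervals, and tiers can be queried for temporal overlap, a basic operation behind affiliating gestures with speech.

```python
# Illustrative sketch: time-aligned annotation tiers for multimodal data.
# Tier names, fields, and the overlap query are hypothetical, not the MAM spec.
from dataclasses import dataclass

@dataclass
class Interval:
    start: float   # seconds
    end: float
    label: str

def overlaps(a: Interval, b: Interval) -> bool:
    """True if the two annotation intervals share any time span."""
    return a.start < b.end and b.start < a.end

# Two tiers from one recording: speech tokens and detected gestures.
speech = [Interval(0.0, 0.4, "the"), Interval(0.4, 1.1, "ball")]
gesture = [Interval(0.3, 1.0, "circular-stroke")]

# Affiliate each gesture with all temporally overlapping speech tokens.
affiliations = {
    g.label: [s.label for s in speech if overlaps(g, s)] for g in gesture
}
print(affiliations)  # {'circular-stroke': ['the', 'ball']}
```

In practice such tiers would be filled automatically from VR sensor streams (body movement, gaze) and speech recognition rather than typed by hand.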
This integration of linguistic theory, computational modeling, and VR-based empirical testing positions CoSGrIn-VR as a novel approach to understanding the graded nature of speech-gesture integration and its implications for visual communication.
Publications and other activities
@misc{Luecking:Mehler:2026,
author = {Lücking, Andy and Mehler, Alexander},
title = {{Sprachbegleitende Gesten, KI und Virtuelle Realität}},
subtitle = {{Multimodale Kommunikationsforschung im Schnittfeld von Linguistik und Computerwissenschaft}},
howpublished = {Invited talk at DaFWEBKON26, a web conference
for teachers of German},
date = {2026-01-28/2026-01-30},
url = {https://dafwebkon.com/events/sprachbegleitende-gesten/},
keywords = {talk, cosgrin-vr},
note = {Invited talk},
abstract = {Everyday communication is usually multimodal (i.e., it uses
more than one information channel). Spoken language, for instance, is
accompanied by manual gestures. These gestures, in turn, can contribute
information that goes beyond the linguistic meaning; they are therefore
semantically interesting. The talk sketches a spatial gesture semantics
and introduces AI-supported gesture classification. To capture and
analyse multimodal behavioural data, methods of virtual reality (VR)
are increasingly being used. The Frankfurt Va.Si.Li-Lab combines AI and
VR for multimodality research. In this way, for example, multimodal,
avatar-based VR interactions can be studied and compared with
face-to-face interactions. The talk presents first results.}
}
@misc{Luecking:2025-zif,
author = {Lücking, Andy},
keywords = {cosgrin-vr},
title = {Formal and Computational Iconic Gesture Semantics},
howpublished = {Invited talk at the ZiF Workshop \textit{Multimodal
Creativity}, Zentrum für interdisziplinäre
Forschung, Universität Bielefeld},
note = {Invited talk},
date = {2025-12-01/2025-12-02}
}
