news

New SemDial publication

TTLab publishes its corpus of avatar-mediated human–human direction-giving dialogues in VR.

Andy Lücking, Felix Voll, Daniel Rott, Alexander Henlein and Alexander Mehler. 2025. Head and Hand Movements During Turn Transitions: Data-Based Multimodal Analysis Using the Frankfurt VR Gesture–Speech Alignment Corpus (FraGA). Proceedings of the 29th Workshop on the Semantics and Pragmatics of Dialogue – Full Papers, 146–156.
BibTeX
@inproceedings{Luecking:Voll:Rott:Henlein:Mehler:2025-fraga,
  title     = {Head and Hand Movements During Turn Transitions: Data-Based Multimodal
               Analysis Using the {Frankfurt VR Gesture--Speech Alignment Corpus}
               ({FraGA})},
  author    = {Lücking, Andy and Voll, Felix and Rott, Daniel and Henlein, Alexander
               and Mehler, Alexander},
  year      = {2025},
  booktitle = {Proceedings of the 29th Workshop on the Semantics and Pragmatics
               of Dialogue -- Full Papers},
  series    = {SemDial'25 -- Bialogue},
  publisher = {SEMDIAL},
  url       = {http://semdial.org/anthology/Z25-Luecking_semdial_3316.pdf},
  pages     = {146--156}
}

ESSLLI advanced course on iconic gesture semantics!

The ViCom project GemDiS offers an ESSLLI 2025 advanced course on the semantics of iconic gestures. This is how the AI model Mistral advertises the course:
“Unravel the mysteries of non-verbal communication in our upcoming advanced course on Spatial Gesture Semantics at ESSLLI 2025! Dive into the fascinating world of manual gestures and their interaction with speech meaning. Discover how spatial semantics and AI technologies are revolutionizing our understanding of gestures. Don’t miss out on this cutting-edge exploration from July 28 to August 1!”