# Dr. Andy Lücking

Staff member

Université de Paris
(formerly Université Paris Diderot (Paris 7))
Laboratoire de Linguistique Formelle (LLF)
Bât. Olympe de Gouges, 5ème étage, 8 Rue Albert Einstein, 75013 Paris
Postal address: 5 Rue Thomas Mann, 75013 Paris, France

Goethe-Universität Frankfurt am Main
Faculty of Computer Science and Mathematics
Text Technology Lab, Room 401e
Robert-Mayer-Straße 10
D-60325 Frankfurt am Main
D-60054 Frankfurt am Main (use for package delivery)

Phone: +49 69-798-24663

My research interests center on linguistic and philosophical theories of meaning and interaction. One focus is the interplay of speech and gesture in communication. More recently, I have also worked on an alternative account of generalised quantifiers. My favourite framework is Type Theory with Records (TTR). In my work, I usually combine theoretical modelling with digital resources and/or experimental methods. This includes building several corpora that are used in linguistics and digital humanities research. I am concerned with topics in multimodal communication such as semantic and pragmatic notions of reference, deferred reference, quantified noun phrases, alignment in dialogue, iconicity, depicting, demonstration and exemplification. I also investigate speech and gesture use in special or specialised dialogues such as educational learning and aphasia therapy.

ORCID iD: Andy Lücking at ORCID

ResearchGate: Andy Lücking at RG

## Publications (62 total)

### 2019 (7)

• A. Lücking, “Dialogue semantics: From cognitive structures to positive and negative learning,” in Frontiers and Advances in Positive Learning in the Age of InformaTiOn (PLATO), O. Zlatkin-Troitschankskaia, Ed., Cham, Switzerland: Springer Nature Switzerland AG, 2019, pp. 197-205.
[BibTeX]

@InCollection{Luecking:2019:a,
author =       {L\"{u}cking, Andy},
title =        {Dialogue semantics: {From} cognitive structures to
positive and negative learning},
year =         2019,
pages =        {197-205},
publisher =    {Springer Nature Switzerland AG},
editor =       {Zlatkin-Troitschankskaia, Olga},
booktitle =    {Frontiers and Advances in Positive Learning in the
Age of InformaTiOn (PLATO)},
doi =          {10.1007/978-3-030-26578-6},
}
• A. Lücking and J. Ginzburg, “Not few but all quantifiers can be negated: towards a referentially transparent semantics of quantified noun phrases,” in Proceedings of the Amsterdam Colloquium 2019, 2019, pp. 269-278.
[BibTeX]

@InProceedings{Luecking:Ginzburg:2019,
author =       {L{\"u}cking, Andy and Ginzburg, Jonathan},
title =        {Not few but all quantifiers can be negated: towards
a referentially transparent semantics of quantified
noun phrases},
booktitle =    {Proceedings of the Amsterdam Colloquium 2019},
series =       {AC'19},
location =     {University of Amsterdam},
year =         2019,
pages =        {269-278},
url =          {http://events.illc.uva.nl/AC/AC2019/},
}
• A. Lücking, “Gesture,” in Head-Driven Phrase Structure Grammar: The handbook, S. Müller, A. Abeillé, R. D. Borsley, and J. Koenig, Eds., Berlin: Language Science Press, 2019.
[BibTeX]

@InCollection{Luecking:2019:b,
keywords =     {own,bookchapter},
author+an =    {1=highlight},
author =       {L\"{u}cking, Andy},
year =         2019,
title =        {Gesture},
editor =       {M\"{u}ller, Stefan and Abeill\'{e}, Anne and
Borsley, Robert D. and Koenig, Jean-Pierre},
booktitle =    {{Head-Driven Phrase Structure Grammar}: {The}
handbook},
publisher =    {Language Science Press},
pdf =
{https://hpsg.hu-berlin.de/Projects/HPSG-handbook/PDFs/gesture.pdf},
url =          {https://langsci-press.org/catalog/book/259}
}
• A. Lücking, J. Ginzburg, and R. Cooper, “Grammar in dialogue,” in Head-Driven Phrase Structure Grammar: The handbook, S. Müller, A. Abeillé, R. D. Borsley, and J. Koenig, Eds., Berlin: Language Science Press, 2019.
[BibTeX]

@InCollection{Luecking:Ginzburg:Cooper:2019,
keywords =     {own,bookchapter},
author+an =    {1=highlight},
author =       {L\"{u}cking, Andy and Ginzburg, Jonathan and Cooper,
Robin},
year =         2019,
title =        {Grammar in dialogue},
editor =       {M\"{u}ller, Stefan and Abeill\'{e}, Anne and
Borsley, Robert D. and Koenig, Jean-Pierre},
booktitle =    {{Head-Driven Phrase Structure Grammar}: {The}
handbook},
publisher =    {Language Science Press},
pdf =
{https://hpsg.hu-berlin.de/Projects/HPSG-handbook/PDFs/dialogue.pdf},
url =          {https://langsci-press.org/catalog/book/259}
}
• A. Lücking, R. Cooper, S. Larsson, and J. Ginzburg, “Distribution is not enough — Going Firther,” in Proceedings of Natural Language and Computer Science, 2019.
[BibTeX]

@InProceedings{Luecking:Cooper:Larsson:Ginzburg:2019,
author =     {Lücking, Andy and Cooper, Robin and Larsson, Staffan and Ginzburg, Jonathan},
title =     {Distribution is not enough -- Going {Firther}},
booktitle =     {Proceedings of Natural Language and Computer Science},
maintitle =     {The 13th International Conference on Computational
Semantics (IWCS 2019)},
series =     {NLCS 6},
location =     {Gothenburg, Sweden},
month =     {May},
year =     2019,
}
• G. Abrami, A. Mehler, A. Lücking, E. Rieb, and P. Helfrich, “TextAnnotator: A flexible framework for semantic annotations,” in Proceedings of the Fifteenth Joint ACL – ISO Workshop on Interoperable Semantic Annotation, (ISA-15), 2019.
[Abstract] [BibTeX]

Modern annotation tools should meet at least the following general requirements: they can handle diverse data and annotation levels within one tool, and they support the annotation process with automatic (pre-)processing outcomes as much as possible. We developed a framework that meets these general requirements and that enables versatile and browser-based annotations of texts, the TextAnnotator. It combines NLP methods of pre-processing with methods of flexible post-processing. In fact, machine learning (ML) requires a lot of training and test data, but is usually far from achieving perfect results. Producing high-level annotations for ML and post-correcting its results are therefore necessary. This is the purpose of TextAnnotator, which is entirely implemented in ExtJS and provides a range of interactive visualizations of annotations. In addition, it allows for flexibly integrating knowledge resources, e.g. in the course of post-processing named entity recognition. The paper describes TextAnnotator’s architecture together with three use cases: annotating temporal structures, argument structures and named entity linking.
@InProceedings{Abrami:et:al:2019,
Author         = {Abrami, Giuseppe and Mehler, Alexander and Lücking, Andy and Rieb, Elias and Helfrich, Philipp},
Title          = {{TextAnnotator}: A flexible framework for semantic annotations},
BookTitle      = {Proceedings of the Fifteenth Joint ACL - ISO Workshop on Interoperable Semantic Annotation, (ISA-15)},
Series         = {ISA-15},
location       = {Gothenburg, Sweden},
month     = {May},
year           = 2019,
abstract   ="Modern annotation tools should meet at least the following general requirements: they can handle diverse data and annotation levels within one tool, and they support the annotation process with automatic (pre-)processing outcomes as much as possible. We developed a framework that meets these general requirements and that enables versatile and browser-based annotations of texts, the TextAnnotator. It combines NLP methods of pre-processing with methods of flexible post-processing. In fact, machine learning (ML) requires a lot of training and test data, but is usually far from achieving perfect results. Producing high-level annotations for ML and post-correcting its results are therefore necessary. This is the purpose of TextAnnotator, which is entirely implemented in ExtJS and provides a range of interactive visualizations of annotations. In addition, it allows for flexibly integrating knowledge resources, e.g. in the course of post-processing named entity recognition. The paper describes TextAnnotator’s architecture together with three use cases: annotating temporal structures, argument structures and named entity linking."
}
• R. Gleim, S. Eger, A. Mehler, T. Uslu, W. Hemati, A. Lücking, A. Henlein, S. Kahlsdorf, and A. Hoenen, “A practitioner’s view: a survey and comparison of lemmatization and morphological tagging in German and Latin,” Journal of Language Modeling, 2019.
[BibTeX]

@article{Gleim:Eger:Mehler:2019,
author    = {Gleim, R\"{u}diger and Eger, Steffen and Mehler, Alexander and Uslu, Tolga and Hemati, Wahed and L\"{u}cking, Andy and Henlein, Alexander and Kahlsdorf, Sven and Hoenen, Armin},
title     = {A practitioner's view: a survey and comparison of lemmatization and morphological tagging in German and Latin},
journal   = {Journal of Language Modeling},
year      = {2019},
doi = {10.15398/jlm.v7i1.205},
url = {http://jlm.ipipan.waw.pl/index.php/JLM/article/view/205}
}

### 2018 (6)

• A. Lücking, “Witness-loaded and Witness-free Demonstratives,” in Atypical Demonstratives, M. Coniglio, A. Murphy, E. Schlachter, and T. Veenstra, Eds., De Gruyter, 2018.
[BibTeX]

@InCollection{Luecking:2018:a,
author =     {Andy L\"{u}cking},
title =     {Witness-loaded and Witness-free Demonstratives},
booktitle =     {Atypical Demonstratives},
publisher =     {De Gruyter},
year =     2018,
editor =     {Marco Coniglio and Andrew Murphy and Eva Schlachter
and Tonjes Veenstra},
isbn =     {978-3-11-056029-9},
url =          {https://www.degruyter.com/view/product/495228}
}
• A. Lücking and J. Ginzburg, “‘Most people but not Bill’: integrating sets, individuals and negation into a cognitively plausible account of noun phrase interpretation,” in Proceedings of Cognitive Structures: Linguistic, Philosophical and Psychological Perspectives, 2018.
[BibTeX]

@InProceedings{Luecking:Ginzburg:2018,
title =        {`Most people but not {Bill}': integrating sets,
individuals and negation into a cognitively
plausible account of noun phrase interpretation},
booktitle =    {Proceedings of Cognitive Structures: Linguistic,
Philosophical and Psychological Perspectives},
series =       {CoSt'18},
author =       {L\"{u}cking, Andy and Ginzburg, Jonathan},
year =         2018
}
• A. Mehler, W. Hemati, T. Uslu, and A. Lücking, “A Multidimensional Model of Syntactic Dependency Trees for Authorship Attribution,” in Quantitative analysis of dependency structures, J. Jiang and H. Liu, Eds., Berlin/New York: De Gruyter, 2018.
[Abstract] [BibTeX]

In this chapter we introduce a multidimensional model of syntactic dependency trees. Our ultimate goal is to generate fingerprints of such trees to predict the author of the underlying sentences. The chapter makes a first attempt to create such fingerprints for sentence categorization via the detour of text categorization. We show that at text level, aggregated dependency structures actually provide information about authorship. At the same time, we show that this does not hold for topic detection. We evaluate our model using a quarter of a million sentences collected in two corpora: the first is sampled from literary texts, the second from Wikipedia articles. As a second finding of our approach, we show that quantitative models of dependency structure do not yet allow for detecting syntactic alignment in written communication. We conclude that this is mainly due to effects of lexical alignment on syntactic alignment.
@InCollection{Mehler:Hemati:Uslu:Luecking:2018,
Author         = {Alexander Mehler and Wahed Hemati and Tolga Uslu and
Andy Lücking},
Title          = {A Multidimensional Model of Syntactic Dependency Trees
for Authorship Attribution},
BookTitle      = {Quantitative analysis of dependency structures},
Publisher      = {De Gruyter},
Editor         = {Jingyang Jiang and Haitao Liu},
abstract       = {Abstract: In this chapter we introduce a
multidimensional model of syntactic dependency trees.
Our ultimate goal is to generate fingerprints of such
trees to predict the author of the underlying
sentences. The chapter makes a first attempt to create
such fingerprints for sentence categorization via the
detour of text categorization. We show that at text
level, aggregated dependency structures actually
provide information about authorship. At the same time,
we show that this does not hold for topic detection. We
evaluate our model using a quarter of a million
sentences collected in two corpora: the first is
sampled from literary texts, the second from Wikipedia
articles. As a second finding of our approach, we show
that quantitative models of dependency structure do not
yet allow for detecting syntactic alignment in written
communication. We conclude that this is mainly due to
effects of lexical alignment on syntactic alignment.},
keywords       = {Dependency structure, Authorship attribution, Text
categorization, Syntactic Alignment},
year           = 2018
}
• A. Mehler, R. Gleim, A. Lücking, T. Uslu, and C. Stegbauer, “On the Self-similarity of Wikipedia Talks: a Combined Discourse-analytical and Quantitative Approach,” Glottometrics, vol. 40, pp. 1-44, 2018.
[BibTeX]

@Article{Mehler:Gleim:Luecking:Uslu:Stegbauer:2018,
Author         = {Alexander Mehler and Rüdiger Gleim and Andy Lücking
and Tolga Uslu and Christian Stegbauer},
Title          = {On the Self-similarity of {Wikipedia} Talks: a
Combined Discourse-analytical and Quantitative Approach},
Journal        = {Glottometrics},
Volume         = {40},
Pages          = {1-44},
year           = 2018
}
• P. Helfrich, E. Rieb, G. Abrami, A. Lücking, and A. Mehler, “TreeAnnotator: Versatile Visual Annotation of Hierarchical Text Relations,” in Proceedings of the 11th edition of the Language Resources and Evaluation Conference, May 7 – 12, Miyazaki, Japan, 2018.
[BibTeX]

@InProceedings{Helfrich:et:al:2018,
Author         = {Philipp Helfrich and Elias Rieb and Giuseppe Abrami
and Andy L{\"u}cking and Alexander Mehler},
Title          = {TreeAnnotator: Versatile Visual Annotation of
Hierarchical Text Relations},
BookTitle      = {Proceedings of the 11th edition of the Language
Resources and Evaluation Conference, May 7 - 12},
Series         = {LREC 2018},
year           = 2018
}
• A. Mehler, O. Zlatkin-Troitschanskaia, W. Hemati, D. Molerov, A. Lücking, and S. Schmidt, “Integrating Computational Linguistic Analysis of Multilingual Learning Data and Educational Measurement Approaches to Explore Learning in Higher Education,” in Positive Learning in the Age of Information: A Blessing or a Curse?, O. Zlatkin-Troitschanskaia, G. Wittum, and A. Dengel, Eds., Wiesbaden: Springer Fachmedien Wiesbaden, 2018, pp. 145-193.
[Abstract] [BibTeX]

This chapter develops a computational linguistic model for analyzing and comparing multilingual data as well as its application to a large body of standardized assessment data from higher education. The approach employs both an automatic and a manual annotation of the data on several linguistic layers (including parts of speech, text structure and content). Quantitative features of the textual data are explored that are related to both the students' (domain-specific knowledge) test results and their level of academic experience. The respective analysis involves statistics of distance correlation, text categorization with respect to text types (questions and response options) as well as languages (English and German), and network analysis to assess dependencies between features. The correlation between correct test results of students and linguistic features of the verbal presentations of tests indicate to what extent language influences higher education test performance. It has also been found that this influence relates to specialized language. Thus, this integrative modeling approach contributes a test basis for a large-scale analysis of learning data and points to a number of subsequent, more detailed research questions.
@inbook{Mehler:et:al:2018,
abstract = "This chapter develops a computational linguistic model for analyzing and comparing multilingual data as well as its application to a large body of standardized assessment data from higher education. The approach employs both an automatic and a manual annotation of the data on several linguistic layers (including parts of speech, text structure and content). Quantitative features of the textual data are explored that are related to both the students' (domain-specific knowledge) test results and their level of academic experience. The respective analysis involves statistics of distance correlation, text categorization with respect to text types (questions and response options) as well as languages (English and German), and network analysis to assess dependencies between features. The correlation between correct test results of students and linguistic features of the verbal presentations of tests indicate to what extent language influences higher education test performance. It has also been found that this influence relates to specialized language. Thus, this integrative modeling approach contributes a test basis for a large-scale analysis of learning data and points to a number of subsequent, more detailed research questions.",
author = "Mehler, Alexander and Zlatkin-Troitschanskaia, Olga and Hemati, Wahed and Molerov, Dimitri and L{\"u}cking, Andy and Schmidt, Susanne",
booktitle = "Positive Learning in the Age of Information: A Blessing or a Curse?",
doi = "10.1007/978-3-658-19567-0_10",
editor = "Zlatkin-Troitschanskaia, Olga and Wittum, Gabriel and Dengel, Andreas",
isbn = "978-3-658-19567-0",
pages = "145--193",
title = "Integrating Computational Linguistic Analysis of Multilingual Learning Data and Educational Measurement Approaches to Explore Learning in Higher Education",
url = "https://doi.org/10.1007/978-3-658-19567-0_10",
year = "2018"
}

### 2017 (2)

• A. Mehler and A. Lücking, “Modelle sozialer Netzwerke und Natural Language Processing: eine methodologische Randnotiz,” Soziologie, vol. 46, iss. 1, pp. 43-47, 2017.
[BibTeX]

@Article{Mehler:Luecking:2017,
Author         = {Alexander Mehler and Andy Lücking},
Title          = {Modelle sozialer Netzwerke und Natural Language
Processing: eine methodologische Randnotiz},
Journal        = {Soziologie},
Volume         = {46},
Number         = {1},
Pages          = {43-47},
year           = 2017
}
• A. Lücking, “Indexicals as Weak Descriptors,” in Proceedings of the 12th International Conference on Computational Semantics, Montpellier (France), 2017.
[BibTeX]

@InProceedings{Luecking:2017:c,
Author         = {L\"{u}cking, Andy},
Title          = {Indexicals as Weak Descriptors},
BookTitle      = {Proceedings of the 12th International Conference on
Computational Semantics},
Series         = {IWCS 2017},
year           = 2017
}

### 2016 (3)

• A. Lücking, “Modeling Co-Verbal Gesture Perception in Type Theory with Records,” in Proceedings of the 2016 Federated Conference on Computer Science and Information Systems, Gdansk, Poland, 2016, pp. 383-392. Best Paper Award
[BibTeX]

@InProceedings{Luecking:2016:b,
Author         = {L\"{u}cking, Andy},
Title          = {Modeling Co-Verbal Gesture Perception in Type Theory
with Records},
BookTitle      = {Proceedings of the 2016 Federated Conference on
Computer Science and Information Systems},
Editor         = {M. Ganzha and L. Maciaszek and M. Paprzycki},
Volume         = {8},
Series         = {Annals of Computer Science and Information Systems},
Pages          = {383-392},
Publisher      = {IEEE},
Note           = {Best Paper Award},
doi            = {10.15439/2016F83},
pdf            = {http://annals-csis.org/Volume_8/pliks/83.pdf},
url            = {http://annals-csis.org/Volume_8/drp/83.html},
year           = 2016
}
• A. Lücking, A. Mehler, D. Walther, M. Mauri, and D. Kurfürst, “Finding Recurrent Features of Image Schema Gestures: the FIGURE corpus,” in Proceedings of the 10th International Conference on Language Resources and Evaluation, 2016.
[BibTeX]

@InProceedings{Luecking:Mehler:Walther:Mauri:Kurfuerst:2016,
Author         = {L\"{u}cking, Andy and Mehler, Alexander and Walther,
D\'{e}sir\'{e}e and Mauri, Marcel and Kurf\"{u}rst,
Dennis},
Title          = {Finding Recurrent Features of Image Schema Gestures:
the {FIGURE} corpus},
BookTitle      = {Proceedings of the 10th International Conference on
Language Resources and Evaluation},
Series         = {LREC 2016},
location       = {Portoro\v{z} (Slovenia)},
year           = 2016
}
• A. Lücking, A. Hoenen, and A. Mehler, “TGermaCorp — A (Digital) Humanities Resource for (Computational) Linguistics,” in Proceedings of the 10th International Conference on Language Resources and Evaluation, 2016.
[BibTeX]

@InProceedings{Luecking:Hoenen:Mehler:2016,
Author         = {L\"{u}cking, Andy and Hoenen, Armin and Mehler,
Alexander},
Title          = {{TGermaCorp} -- A (Digital) Humanities Resource for
(Computational) Linguistics},
BookTitle      = {Proceedings of the 10th International Conference on
Language Resources and Evaluation},
Series         = {LREC 2016},
islrn          = {536-382-801-278-5},
location       = {Portoro\v{z} (Slovenia)},
year           = 2016
}

### 2015 (1)

• A. Lücking, T. Pfeiffer, and H. Rieser, “Pointing and Reference Reconsidered,” Journal of Pragmatics, vol. 77, pp. 56-79, 2015.
[Abstract] [BibTeX]

Current semantic theory on indexical expressions claims that demonstratively used indexicals such as this lack a referent-determining meaning but instead rely on an accompanying demonstration act like a pointing gesture. While this view allows to set up a sound logic of demonstratives, the direct-referential role assigned to pointing gestures has never been scrutinized thoroughly in semantics or pragmatics. We investigate the semantics and pragmatics of co-verbal pointing from a foundational perspective combining experiments, statistical investigation, computer simulation and theoretical modeling techniques in a novel manner. We evaluate various referential hypotheses with a corpus of object identification games set up in experiments in which body movement tracking techniques have been extensively used to generate precise pointing measurements. Statistical investigation and computer simulations show that especially distal areas in the pointing domain falsify the semantic direct-referential hypotheses concerning pointing gestures. As an alternative, we propose that reference involving pointing rests on a default inference which we specify using the empirical data. These results raise numerous problems for classical semantics–pragmatics interfaces: we argue for pre-semantic pragmatics in order to account for inferential reference in addition to classical post-semantic Gricean pragmatics.
@Article{Luecking:Pfeiffer:Rieser:2015,
Author         = {Lücking, Andy and Pfeiffer, Thies and Rieser, Hannes},
Title          = {Pointing and Reference Reconsidered},
Journal        = {Journal of Pragmatics},
Volume         = {77},
Pages          = {56-79},
abstract       = {Current semantic theory on indexical expressions
claims that demonstratively used indexicals such as
this lack a referent-determining meaning but instead
rely on an accompanying demonstration act like a
pointing gesture. While this view allows to set up a
sound logic of demonstratives, the direct-referential
role assigned to pointing gestures has never been
scrutinized thoroughly in semantics or pragmatics. We
investigate the semantics and pragmatics of co-verbal
pointing from a foundational perspective combining
experiments, statistical investigation, computer
simulation and theoretical modeling techniques in a
novel manner. We evaluate various referential
hypotheses with a corpus of object identification games
set up in experiments in which body movement tracking
techniques have been extensively used to generate
precise pointing measurements. Statistical
investigation and computer simulations show that
especially distal areas in the pointing domain falsify
the semantic direct-referential hypotheses concerning
pointing gestures. As an alternative, we propose that
reference involving pointing rests on a default
inference which we specify using the empirical data.
These results raise numerous problems for classical
semantics–pragmatics interfaces: we argue for
pre-semantic pragmatics in order to account for
inferential reference in addition to classical
post-semantic Gricean pragmatics.},
doi            = {10.1016/j.pragma.2014.12.013},
website        = {http://www.sciencedirect.com/science/article/pii/S037821661500003X},
year           = 2015
}

### 2014 (2)

• A. Mehler, T. vor der Brück, and A. Lücking, “Comparing Hand Gesture Vocabularies for HCI,” in Proceedings of HCI International 2014, 22 – 27 June 2014, Heraklion, Greece, Berlin/New York: Springer, 2014.
[Abstract] [BibTeX]

HCI systems are often equipped with gestural interfaces drawing on a predefined set of admitted gestures. We provide an assessment of the fitness of such gesture vocabularies in terms of their learnability and naturalness. This is done by example of rivaling gesture vocabularies of the museum information system WikiNect. In this way, we do not only provide a procedure for evaluating gesture vocabularies, but additionally contribute to design criteria to be followed by the gestures.
@InCollection{Mehler:vor:der:Brueck:Luecking:2014,
Author         = {Mehler, Alexander and vor der Brück, Tim and
Lücking, Andy},
Title          = {Comparing Hand Gesture Vocabularies for HCI},
BookTitle      = {Proceedings of HCI International 2014, 22 - 27 June
2014, Heraklion, Greece},
Publisher      = {Springer},
abstract       = {HCI systems are often equipped with gestural
interfaces drawing on a predefined set of admitted
gestures. We provide an assessment of the fitness of
such gesture vocabularies in terms of their
learnability and naturalness. This is done by example
of rivaling gesture vocabularies of the museum
information system WikiNect. In this way, we do not
only provide a procedure for evaluating gesture
vocabularies, but additionally contribute to design
criteria to be followed by the gestures.},
keywords       = {wikinect},
year           = 2014
}
• A. Mehler, A. Lücking, and G. Abrami, “WikiNect: Image Schemata as a Basis of Gestural Writing for Kinetic Museum Wikis,” Universal Access in the Information Society, pp. 1-17, 2014.
[Abstract] [BibTeX]

This paper provides a theoretical assessment of gestures in the context of authoring image-related hypertexts by example of the museum information system WikiNect. To this end, a first implementation of gestural writing based on image schemata is provided (Lakoff in Women, fire, and dangerous things: what categories reveal about the mind. University of Chicago Press, Chicago, 1987). Gestural writing is defined as a sort of coding in which propositions are only expressed by means of gestures. In this respect, it is shown that image schemata allow for bridging between natural language predicates and gestural manifestations. Further, it is demonstrated that gestural writing primarily focuses on the perceptual level of image descriptions (Hollink et al. in Int J Hum Comput Stud 61(5):601–626, 2004). By exploring the metaphorical potential of image schemata, it is finally illustrated how to extend the expressiveness of gestural writing in order to reach the conceptual level of image descriptions. In this context, the paper paves the way for implementing museum information systems like WikiNect as systems of kinetic hypertext authoring based on full-fledged gestural writing.
@Article{Mehler:Luecking:Abrami:2014,
Author         = {Mehler, Alexander and Lücking, Andy and Abrami,
Giuseppe},
Title          = {{WikiNect}: Image Schemata as a Basis of Gestural
Writing for Kinetic Museum Wikis},
Journal        = {Universal Access in the Information Society},
Pages          = {1-17},
abstract       = {This paper provides a theoretical assessment of
gestures in the context of authoring image-related
hypertexts by example of the museum information system
WikiNect. To this end, a first implementation of
gestural writing based on image schemata is provided
(Lakoff in Women, fire, and dangerous things: what
categories reveal about the mind. University of Chicago
Press, Chicago, 1987). Gestural writing is defined as a
sort of coding in which propositions are only expressed
by means of gestures. In this respect, it is shown that
image schemata allow for bridging between natural
language predicates and gestural manifestations.
Further, it is demonstrated that gestural writing
primarily focuses on the perceptual level of image
descriptions (Hollink et al. in Int J Hum Comput Stud
61(5):601–626, 2004). By exploring the metaphorical
potential of image schemata, it is finally illustrated
how to extend the expressiveness of gestural writing in
order to reach the conceptual level of image
descriptions. In this context, the paper paves the way
for implementing museum information systems like
WikiNect as systems of kinetic hypertext authoring
based on full-fledged gestural writing.},
doi            = {10.1007/s10209-014-0386-8},
issn           = {1615-5289},
keywords       = {wikinect},
website        = {http://dx.doi.org/10.1007/s10209-014-0386-8},
year           = 2014
}

### 2013 (8)

• A. Mehler, A. Lücking, T. vor der Brück, and G. Abrami, WikiNect – A Kinetic Artwork Wiki for Exhibition Visitors, 2013.
[Poster] [BibTeX]

@Misc{Mehler:Luecking:vor:der:Brueck:2013:a,
Author         = {Mehler, Alexander and Lücking, Andy and vor der
Brück, Tim and Abrami, Giuseppe},
Title          = {WikiNect - A Kinetic Artwork Wiki for Exhibition
Visitors},
HowPublished   = {Poster Presentation at the Scientific Computing and
Cultural Heritage 2013 Conference, Heidelberg},
keywords       = {wikinect},
month          = {11},
url            = {http://scch2013.wordpress.com/},
year           = 2013
}
• A. Lücking, Theoretische Bausteine für einen semiotischen Ansatz zum Einsatz von Gestik in der Aphasietherapie, 2013.
[BibTeX]

@Misc{Luecking:2013:c,
Author         = {Lücking, Andy},
Title          = {Theoretische Bausteine für einen semiotischen Ansatz
zum Einsatz von Gestik in der Aphasietherapie},
HowPublished   = {Talk at the BKL workshop 2013, Bochum},
month          = {05},
url            = {http://www.bkl-ev.de/bkl_workshop/archiv/workshop13_programm.php},
year           = 2013
}
• A. Lücking, Eclectic Semantics for Non-Verbal Signs, 2013.
[BibTeX]

@Misc{Luecking:2013:d,
Author         = {Lücking, Andy},
Title          = {Eclectic Semantics for Non-Verbal Signs},
HowPublished   = {Talk at the Conference on Investigating semantics:
Empirical and philosophical approaches, Bochum},
month          = {10},
url            = {http://www.ruhr-uni-bochum.de/phil-lang/investigating/index.html},
year           = 2013
}
• A. Lücking, “Multimodal Propositions? From Semiotic to Semantic Considerations in the Case of Gestural Deictics,” in Poster Abstracts of the Proceedings of the 17th Workshop on the Semantics and Pragmatics of Dialogue, Amsterdam, 2013, pp. 221-223.
[Poster] [BibTeX]

@InProceedings{Luecking:2013:e,
Author         = {Lücking, Andy},
Title          = {Multimodal Propositions? From Semiotic to Semantic
Considerations in the Case of Gestural Deictics},
BookTitle      = {Poster Abstracts of the Proceedings of the 17th
Workshop on the Semantics and Pragmatics of Dialogue},
Editor         = {Fernandez, Raquel and Isard, Amy},
Series         = {SemDial 2013},
Pages          = {221-223},
month          = {12},
year           = 2013
}
• A. Lücking and A. Mehler, “On Three Notions of Grounding of Artificial Dialog Companions,” Science, Technology & Innovation Studies, vol. 10, iss. 1, pp. 31-36, 2013.
[Abstract] [BibTeX]

We provide a new, theoretically motivated evaluation grid for assessing the conversational achievements of Artificial Dialog Companions (ADCs). The grid is spanned along three grounding problems. Firstly, it is argued that symbol grounding in general has to be intrinsic. Current approaches in this context, however, are limited to a certain kind of expression that can be grounded in this way. Secondly, we identify three requirements for conversational grounding, the process leading to mutual understanding. Finally, we sketch a test case for symbol grounding in the form of the philosophical grounding problem that involves the use of modal language. Together, the three grounding problems provide a grid that allows us to assess ADCs’ dialogical performances and to pinpoint future developments on these grounds.
@Article{Luecking:Mehler:2013:a,
Author         = {Lücking, Andy and Mehler, Alexander},
Title          = {On Three Notions of Grounding of Artificial Dialog
Companions},
Journal        = {Science, Technology \& Innovation Studies},
Volume         = {10},
Number         = {1},
Pages          = {31-36},
abstract       = {We provide a new, theoretically motivated evaluation
grid for assessing the conversational achievements of
Artificial Dialog Companions (ADCs). The grid is
spanned along three grounding problems. Firstly, it is
argued that symbol grounding in general has to be
intrinsic. Current approaches in this context,
however, are limited to a certain kind of expression
that can be grounded in this way. Secondly, we identify
three requirements for conversational grounding, the
process leading to mutual understanding. Finally, we
sketch a test case for symbol grounding in the form of
the philosophical grounding problem that involves the
use of modal language. Together, the three grounding
problems provide a grid that allows us to assess
ADCs’ dialogical performances and to pinpoint future
developments on these grounds.},
website        = {http://www.sti-studies.de/ojs/index.php/sti/article/view/143},
year           = 2013
}
• A. Lücking, “Interfacing Speech and Co-Verbal Gesture: Exemplification,” in Proceedings of the 35th Annual Conference of the German Linguistic Society, Potsdam, Germany, 2013, pp. 284-286.
[BibTeX]

@InProceedings{Luecking:2013:b,
Author         = {Lücking, Andy},
Title          = {Interfacing Speech and Co-Verbal Gesture:
Exemplification},
BookTitle      = {Proceedings of the 35th Annual Conference of the
German Linguistic Society},
Series         = {DGfS 2013},
Pages          = {284-286},
year           = 2013
}
• A. Lücking, Ikonische Gesten. Grundzüge einer linguistischen Theorie, Berlin and Boston: De Gruyter, 2013. Zugl. Diss. Univ. Bielefeld (2011)
[Abstract] [BibTeX]

Non-verbal signs, especially speech-accompanying gestures, play a prominent role in human communication. In order to make gesture amenable to analysis within the disciplines concerned with the study and modelling of dialogue, an appropriate linguistic framework theory is needed. “Ikonische Gesten” offers a first framework, motivated by theories of signs and of perception, within which a grammatical analysis of the integration of speech and gesture is possible. Starting from an outline of semiotic approaches to iconic signs, the prevailing resemblance account is rejected, drawing on theories of perception, in favour of an exemplification account. Exemplification is implemented within a unification-based grammar, where, among other things, multimodal well-formedness, synchrony and multimodal subcategorisation are introduced as new objects of linguistic research and modelled within an integrative analysis of speech and gesture.
@Book{Luecking:2013,
Author         = {Lücking, Andy},
Title          = {Ikonische Gesten. Grundzüge einer linguistischen
Theorie},
Publisher      = {De Gruyter},
Note           = {Zugl. Diss. Univ. Bielefeld (2011)},
abstract       = {Nicht-verbale Zeichen, insbesondere sprachbegleitende
Gesten, spielen eine herausragende Rolle in der
menschlichen Kommunikation. Um eine Analyse von Gestik
innerhalb derjenigen Disziplinen, die sich mit der
Erforschung und Modellierung von Dialogen
besch{\"a}ftigen, zu ermöglichen, bedarf es einer
entsprechenden linguistischen Rahmentheorie.
„Ikonische Gesten“ bietet einen ersten zeichen- und
wahrnehmungstheoretisch motivierten Rahmen an, in dem
eine grammatische Analyse der Integration von Sprache
und Gestik möglich ist. Ausgehend von einem Abriss
semiotischer Zug{\"a}nge zu ikonischen Zeichen wird der
vorherrschende {\"A}hnlichkeitsansatz unter Rückgriff
auf Wahrnehmungstheorien zugunsten eines
Exemplifikationsansatzes verworfen. Exemplifikation
wird im Rahmen einer unifikationsbasierten Grammatik
umgesetzt. Dort werden u.a. multimodale
Wohlgeformtheit, Synchronie und multimodale
Subkategorisierung als neue Gegenst{\"a}nde
linguistischer Forschung eingeführt und im Rahmen
einer integrativen Analyse von Sprache und Gestik
modelliert.},
year           = 2013
}
• A. Lücking, K. Bergmann, F. Hahn, S. Kopp, and H. Rieser, “Data-based Analysis of Speech and Gesture: The Bielefeld Speech and Gesture Alignment Corpus (SaGA) and its Applications,” Journal of Multimodal User Interfaces, vol. 7, iss. 1-2, pp. 5-18, 2013.
[Abstract] [BibTeX]

Communicating face-to-face, interlocutors frequently produce multimodal meaning packages consisting of speech and accompanying gestures. We discuss a systematically annotated speech and gesture corpus consisting of 25 route-and-landmark-description dialogues, the Bielefeld Speech and Gesture Alignment corpus (SaGA), collected in experimental face-to-face settings. We first describe the primary and secondary data of the corpus and its reliability assessment. Then we go into some of the projects carried out using SaGA demonstrating the wide range of its usability: on the empirical side, there is work on gesture typology, individual and contextual parameters influencing gesture production and gestures’ functions for dialogue structure. Speech-gesture interfaces have been established extending unification-based grammars. In addition, the development of a computational model of speech-gesture alignment and its implementation constitutes a research line we focus on.
@Article{Luecking:Bergmann:Hahn:Kopp:Rieser:2012,
Author         = {Lücking, Andy and Bergmann, Kirsten and Hahn, Florian
and Kopp, Stefan and Rieser, Hannes},
Title          = {Data-based Analysis of Speech and Gesture: The
Bielefeld Speech and Gesture Alignment Corpus (SaGA)
and its Applications},
Journal        = {Journal of Multimodal User Interfaces},
Volume         = {7},
Number         = {1-2},
Pages          = {5-18},
abstract       = {Communicating face-to-face, interlocutors frequently
produce multimodal meaning packages consisting of
speech and accompanying gestures. We discuss a
systematically annotated speech and gesture corpus
consisting of 25 route-and-landmark-description
dialogues, the Bielefeld Speech and Gesture Alignment
corpus (SaGA), collected in experimental face-to-face
settings. We first describe the primary and secondary
data of the corpus and its reliability assessment. Then
we go into some of the projects carried out using SaGA
demonstrating the wide range of its usability: on the
empirical side, there is work on gesture typology,
individual and contextual parameters influencing
gesture production and gestures’ functions for
dialogue structure. Speech-gesture interfaces have been
established extending unification-based grammars. In
addition, the development of a computational model of
speech-gesture alignment and its implementation
constitutes a research line we focus on.},
doi            = {10.1007/s12193-012-0106-8},
year           = 2013
}

### 2012 (8)

• A. Mehler and A. Lücking, “Pathways of Alignment between Gesture and Speech: Assessing Information Transmission in Multimodal Ensembles,” in Proceedings of the International Workshop on Formal and Computational Approaches to Multimodal Communication under the auspices of ESSLLI 2012, Opole, Poland, 6-10 August, 2012.
[Abstract] [BibTeX]

We present an empirical account of multimodal ensembles based on Hjelmslev’s notion of selection. This is done to get measurable evidence for the existence of speech-and-gesture ensembles. Utilizing information theory, we show that there is an information transmission that makes a gesture’s representation technique predictable when merely knowing its lexical affiliate – in line with the notion of the primacy of language. Thus, there is evidence for a one-way coupling – going from words to gestures – that leads to speech-and-gesture alignment and underlies the constitution of multimodal ensembles.
@InProceedings{Mehler:Luecking:2012:d,
Author         = {Mehler, Alexander and Lücking, Andy},
Title          = {Pathways of Alignment between Gesture and Speech:
Assessing Information Transmission in Multimodal
Ensembles},
BookTitle      = {Proceedings of the International Workshop on Formal
and Computational Approaches to Multimodal
Communication under the auspices of ESSLLI 2012, Opole,
Poland, 6-10 August},
Editor         = {Gianluca Giorgolo and Katya Alahverdzhieva},
abstract       = {We present an empirical account of multimodal
ensembles based on Hjelmslev’s notion of selection.
This is done to get measurable evidence for the
existence of speech-and-gesture ensembles. Utilizing
information theory, we show that there is an
information transmission that makes a gesture’s
representation technique predictable when merely
knowing its lexical affiliate – in line with the
notion of the primacy of language. Thus, there is
evidence for a one-way coupling – going from words to
gestures – that leads to speech-and-gesture alignment
and underlies the constitution of multimodal ensembles.},
keywords       = {wikinect},
website        = {http://www.researchgate.net/publication/268368670_Pathways_of_Alignment_between_Gesture_and_Speech_Assessing_Information_Transmission_in_Multimodal_Ensembles},
year           = 2012
}
• A. Lücking, “Towards a Conceptual, Unification-based Speech-Gesture Interface,” in Proceedings of the International Workshop on Formal and Computational Approaches to Multimodal Communication under the auspices of ESSLLI 2012, Opole, Poland, 6-10 August, 2012.
[Abstract] [BibTeX]

A framework for grounding the semantics of co-verbal iconic gestures is presented. A resemblance account to iconicity is discarded in favor of an exemplification approach. It is sketched how exemplification can be captured within a unification-based grammar that provides a conceptual interface. Gestures modeled as vector sequences are the exemplificational base. Some hypotheses that follow from the general account are pointed at and remaining challenges are discussed.
@InProceedings{Luecking:2012,
Author         = {Lücking, Andy},
Title          = {Towards a Conceptual, Unification-based Speech-Gesture
Interface},
BookTitle      = {Proceedings of the International Workshop on Formal
and Computational Approaches to Multimodal
Communication under the auspices of ESSLLI 2012, Opole,
Poland, 6-10 August},
Editor         = {Gianluca Giorgolo and Katya Alahverdzhieva},
abstract       = {A framework for grounding the semantics of co-verbal
iconic gestures is presented. A resemblance account to
iconicity is discarded in favor of an exemplification
approach. It is sketched how exemplification can be
captured within a unification-based grammar that
provides a conceptual interface. Gestures modeled as
vector sequences are the exemplificational base. Some
hypotheses that follow from the general account are
pointed at and remaining challenges are discussed.},
year           = 2012
}
• A. Mehler and A. Lücking, “WikiNect: Towards a Gestural Writing System for Kinetic Museum Wikis,” in Proceedings of the International Workshop On User Experience in e-Learning and Augmented Technologies in Education (UXeLATE 2012) in Conjunction with ACM Multimedia 2012, 29 October- 2 November, Nara, Japan, 2012, pp. 7-12.
[Abstract] [BibTeX]

We introduce WikiNect as a kinetic museum information system that allows museum visitors to give on-site feedback about exhibitions. To this end, WikiNect integrates three approaches to Human-Computer Interaction (HCI): games with a purpose, wiki-based collaborative writing and kinetic text-technologies. Our aim is to develop kinetic technologies as a new paradigm of HCI. They dispense with classical interfaces (e.g., keyboards) in that they build on non-contact modes of communication like gestures or facial expressions as input displays. In this paper, we introduce the notion of gestural writing as a kinetic text-technology that underlies WikiNect to enable museum visitors to communicate their feedback. The basic idea is to explore sequences of gestures that share the semantic expressivity of verbally manifested speech acts. Our task is to identify such gestures that are learnable on-site in the usage scenario of WikiNect. This is done by referring to so-called transient gestures as part of multimodal ensembles, which are candidate gestures of the desired functionality.
@InProceedings{Mehler:Luecking:2012:c,
Author         = {Mehler, Alexander and Lücking, Andy},
Title          = {WikiNect: Towards a Gestural Writing System for
Kinetic Museum Wikis},
BookTitle      = {Proceedings of the International Workshop On User
Experience in e-Learning and Augmented Technologies in
Education (UXeLATE 2012) in Conjunction with ACM
Multimedia 2012, 29 October- 2 November, Nara, Japan},
Pages          = {7-12},
abstract       = {We introduce WikiNect as a kinetic museum information
system that allows museum visitors to give on-site
feedback about exhibitions. To this end, WikiNect
integrates three approaches to Human-Computer
Interaction (HCI): games with a purpose, wiki-based
collaborative writing and kinetic text-technologies.
Our aim is to develop kinetic technologies as a new
paradigm of HCI. They dispense with classical
interfaces (e.g., keyboards) in that they build on
non-contact modes of communication like gestures or
facial expressions as input displays. In this paper, we
introduce the notion of gestural writing as a kinetic
text-technology that underlies WikiNect to enable
museum visitors to communicate their feedback. The
basic idea is to explore sequences of gestures that
share the semantic expressivity of verbally manifested
speech acts. Our task is to identify such gestures that
are learnable on-site in the usage scenario of
WikiNect. This is done by referring to so-called
transient gestures as part of multimodal ensembles,
which are candidate gestures of the desired
functionality. },
keywords       = {wikinect},
website        = {http://www.researchgate.net/publication/262319200_WikiNect_towards_a_gestural_writing_system_for_kinetic_museum_wikis},
year           = 2012
}
• A. Lücking, S. Ptock, and K. Bergmann, “Assessing Agreement on Segmentations by Means of Staccato, the Segmentation Agreement Calculator according to Thomann,” in Gesture and Sign Language in Human-Computer Interaction and Embodied Communication, E. Efthimiou, G. Kouroupetroglou, and S. Fotina, Eds., Berlin and Heidelberg: Springer, 2012, vol. 7206, pp. 129-138.
[Abstract] [BibTeX]

Staccato, the Segmentation Agreement Calculator According to Thomann, is a software tool for assessing the degree of agreement of multiple segmentations of some time-related data (e.g., gesture phases or sign language constituents). The software implements an assessment procedure developed by Bruno Thomann and will be made publicly available. The article discusses the rationale of the agreement assessment procedure and points at future extensions of Staccato.
@InCollection{Luecking:Ptock:Bergmann:2012,
Author         = {Lücking, Andy and Ptock, Sebastian and Bergmann,
Kirsten},
Title          = {Assessing Agreement on Segmentations by Means of
Staccato, the Segmentation Agreement Calculator
according to Thomann},
BookTitle      = {Gesture and Sign Language in Human-Computer
Interaction and Embodied Communication},
Publisher      = {Springer},
Editor         = {Eleni Efthimiou and Georgios Kouroupetroglou and
Stavroula-Evita Fotina},
Volume         = {7206},
Series         = {Lecture Notes in Artificial Intelligence},
Pages          = {129-138},
abstract       = {Staccato, the Segmentation Agreement Calculator
According to Thomann, is a software tool for assessing
the degree of agreement of multiple segmentations of
some time-related data (e.g., gesture phases or sign
language constituents). The software implements an
assessment procedure developed by Bruno Thomann and
will be made publicly available. The article discusses
the rationale of the agreement assessment procedure and
points at future extensions of Staccato.},
booksubtitle   = {9th International Gesture Workshop, GW 2011, Athens,
Greece, May 2011, Revised Selected Papers},
year           = 2012
}
• A. Mehler, A. Lücking, and P. Menke, “Assessing Cognitive Alignment in Different Types of Dialog by means of a Network Model,” Neural Networks, vol. 32, pp. 159-164, 2012.
[Abstract] [BibTeX]

We present a network model of dialog lexica, called TiTAN (Two-layer Time-Aligned Network) series. TiTAN series capture the formation and structure of dialog lexica in terms of serialized graph representations. The dynamic update of TiTAN series is driven by the dialog-inherent timing of turn-taking. The model provides a link between neural, connectionist underpinnings of dialog lexica on the one hand and observable symbolic behavior on the other. On the neural side, priming and spreading activation are modeled in terms of TiTAN networking. On the symbolic side, TiTAN series account for cognitive alignment in terms of the structural coupling of the linguistic representations of dialog partners. This structural stance allows us to apply TiTAN in machine learning of data of dialogical alignment. In previous studies, it has been shown that aligned dialogs can be distinguished from non-aligned ones by means of TiTAN-based modeling. Now, we simultaneously apply this model to two types of dialog: task-oriented, experimentally controlled dialogs on the one hand and more spontaneous, direction giving dialogs on the other. We ask whether it is possible to separate aligned dialogs from non-aligned ones in a type-crossing way. Starting from a recent experiment (Mehler, Lücking, & Menke, 2011a), we show that such a type-crossing classification is indeed possible. This hints at a structural fingerprint left by alignment in networks of linguistic items that are routinely co-activated during conversation.
@Article{Mehler:Luecking:Menke:2012,
Author         = {Mehler, Alexander and Lücking, Andy and Menke, Peter},
Title          = {Assessing Cognitive Alignment in Different Types of
Dialog by means of a Network Model},
Journal        = {Neural Networks},
Volume         = {32},
Pages          = {159-164},
abstract       = {We present a network model of dialog lexica, called
TiTAN (Two-layer Time-Aligned Network) series. TiTAN
series capture the formation and structure of dialog
lexica in terms of serialized graph representations.
The dynamic update of TiTAN series is driven by the
dialog-inherent timing of turn-taking. The model
provides a link between neural, connectionist
underpinnings of dialog lexica on the one hand and
observable symbolic behavior on the other. On the
neural side, priming and spreading activation are
modeled in terms of TiTAN networking. On the symbolic
side, TiTAN series account for cognitive alignment in
terms of the structural coupling of the linguistic
representations of dialog partners. This structural
stance allows us to apply TiTAN in machine learning of
data of dialogical alignment. In previous studies, it
has been shown that aligned dialogs can be
distinguished from non-aligned ones by means of
TiTAN-based modeling. Now, we simultaneously apply this
model to two types of dialog: task-oriented,
experimentally controlled dialogs on the one hand and
more spontaneous, direction giving dialogs on the
other. We ask whether it is possible to separate
aligned dialogs from non-aligned ones in a
type-crossing way. Starting from a recent experiment
(Mehler, Lücking, \& Menke, 2011a), we show that such
a type-crossing classification is indeed possible. This
hints at a structural fingerprint left by alignment in
networks of linguistic items that are routinely
co-activated during conversation.},
doi            = {10.1016/j.neunet.2012.02.013},
website        = {http://www.sciencedirect.com/science/article/pii/S0893608012000421},
year           = 2012
}
• A. Lücking and T. Pfeiffer, “Framing Multimodal Technical Communication. With Focal Points in Speech-Gesture-Integration and Gaze Recognition,” in Handbook of Technical Communication, A. Mehler, L. Romary, and D. Gibbon, Eds., De Gruyter Mouton, 2012, vol. 8, pp. 591-644.
[BibTeX]

@InCollection{Luecking:Pfeiffer:2012,
Author         = {Lücking, Andy and Pfeiffer, Thies},
Title          = {Framing Multimodal Technical Communication. With Focal
Points in Speech-Gesture-Integration and Gaze
Recognition},
BookTitle      = {Handbook of Technical Communication},
Publisher      = {De Gruyter Mouton},
Editor         = {Alexander Mehler and Laurent Romary and Dafydd Gibbon},
Volume         = {8},
Series         = {Handbooks of Applied Linguistics},
Chapter        = {18},
Pages          = {591-644},
website        = {http://www.degruyter.com/view/books/9783110224948/9783110224948.591/9783110224948.591.xml},
year           = 2012
}
• P. Kubina, O. Abramov, and A. Lücking, “Barrier-free Communication,” in Handbook of Technical Communication, A. Mehler and L. Romary, Eds., Berlin and Boston: De Gruyter Mouton, 2012, vol. 8, pp. 645-706.
[BibTeX]

@InCollection{Kubina:Abramov:Luecking:2012,
Author         = {Kubina, Petra and Abramov, Olga and Lücking, Andy},
Title          = {Barrier-free Communication},
BookTitle      = {Handbook of Technical Communication},
Publisher      = {De Gruyter Mouton},
Editor         = {Alexander Mehler and Laurent Romary},
Volume         = {8},
Series         = {Handbooks of Applied Linguistics},
Chapter        = {19},
Pages          = {645-706},
editora        = {Dafydd Gibbon},
editoratype    = {collaborator},
website        = {http://www.degruyter.com/view/books/9783110224948/9783110224948.645/9783110224948.645.xml},
year           = 2012
}
• A. Lücking and A. Mehler, “What’s the Scope of the Naming Game? Constraints on Semantic Categorization,” in Proceedings of the 9th International Conference on the Evolution of Language, Kyoto, Japan, 2012, pp. 196-203.
[Abstract] [BibTeX]

The Naming Game (NG) has become a vivid research paradigm for simulation studies on language evolution and the establishment of naming conventions. Recently, NGs were used for reconstructing the creation of linguistic categories, most notably for color terms. We recap the functional principle of NGs and the later Categorization Games (CGs) and evaluate them in the light of semantic data of linguistic categorization outside the domain of colors. This comparison reveals two specifics of the CG paradigm: Firstly, the emerging categories draw basically on the predefined topology of the learning domain. Secondly, the kind of categories that can be learnt in CGs is bound to context-independent intersective categories. This suggests that the NG and the CG focus on a special aspect of natural language categorization, which disregards context-sensitive categories used in a non-compositional manner.
@InProceedings{Luecking:Mehler:2012,
Author         = {Lücking, Andy and Mehler, Alexander},
Title          = {What's the Scope of the Naming Game? Constraints on
Semantic Categorization},
BookTitle      = {Proceedings of the 9th International Conference on the
Evolution of Language},
Pages          = {196-203},
abstract       = {The Naming Game (NG) has become a vivid research
paradigm for simulation studies on language evolution
and the establishment of naming conventions. Recently,
NGs were used for reconstructing the creation of
linguistic categories, most notably for color terms. We
recap the functional principle of NGs and the later
Categorization Games (CGs) and evaluate them in the
light of semantic data of linguistic categorization
outside the domain of colors. This comparison reveals
two specifics of the CG paradigm: Firstly, the emerging
categories draw basically on the predefined topology of
the learning domain. Secondly, the kind of categories
that can be learnt in CGs is bound to
context-independent intersective categories. This
suggests that the NG and the CG focus on a special
aspect of natural language categorization, which
disregards context-sensitive categories used in a
non-compositional manner.},
url            = {http://kyoto.evolang.org/},
website        = {https://www.researchgate.net/publication/267858061_WHAT'S_THE_SCOPE_OF_THE_NAMING_GAME_CONSTRAINTS_ON_SEMANTIC_CATEGORIZATION},
year           = 2012
}

### 2011 (6)

• A. Lücking and A. Mehler, “A Model of Complexity Levels of Meaning Constitution in Simulation Models of Language Evolution,” International Journal of Signs and Semiotic Systems, vol. 1, iss. 1, pp. 18-38, 2011.
[Abstract] [BibTeX]

Currently, some simulative accounts exist within dynamic or evolutionary frameworks that are concerned with the development of linguistic categories within a population of language users. Although these studies mostly emphasize that their models are abstract, the paradigm categorization domain is preferably that of colors. In this paper, the authors argue that color adjectives are special predicates in both linguistic and metaphysical terms: semantically, they are intersective predicates, metaphysically, color properties can be empirically reduced onto purely physical properties. The restriction of categorization simulations to the color paradigm systematically leads to ignoring two ubiquitous features of natural language predicates, namely relativity and context-dependency. Therefore, the models for simulation models of linguistic categories are not able to capture the formation of categories like perspective-dependent predicates ‘left’ and ‘right’, subsective predicates like ‘small’ and ‘big’, or predicates that make reference to abstract objects like ‘I prefer this kind of situation’. The authors develop a three-dimensional grid of ascending complexity that is partitioned according to the semiotic triangle. They also develop a conceptual model in the form of a decision grid by means of which the complexity level of simulation models of linguistic categorization can be assessed in linguistic terms.
@Article{Luecking:Mehler:2011,
Author         = {Lücking, Andy and Mehler, Alexander},
Title          = {A Model of Complexity Levels of Meaning Constitution
in Simulation Models of Language Evolution},
Journal        = {International Journal of Signs and Semiotic Systems},
Volume         = {1},
Number         = {1},
Pages          = {18-38},
abstract       = {Currently, some simulative accounts exist within
dynamic or evolutionary frameworks that are concerned
with the development of linguistic categories within a
population of language users. Although these studies
mostly emphasize that their models are abstract, the
paradigm categorization domain is preferably that of
colors. In this paper, the authors argue that color
adjectives are special predicates in both linguistic
and metaphysical terms: semantically, they are
intersective predicates, metaphysically, color
properties can be empirically reduced onto purely
physical properties. The restriction of categorization
simulations to the color paradigm systematically leads
to ignoring two ubiquitous features of natural language
predicates, namely relativity and context-dependency.
Therefore, the models for simulation models of
linguistic categories are not able to capture the
formation of categories like perspective-dependent
predicates ‘left’ and ‘right’, subsective
predicates like ‘small’ and ‘big’, or
predicates that make reference to abstract objects like
‘I prefer this kind of situation’. The authors
develop a three-dimensional grid of ascending
complexity that is partitioned according to the
semiotic triangle. They also develop a conceptual model
in the form of a decision grid by means of which the
complexity level of simulation models of linguistic
categorization can be assessed in linguistic terms.},
year           = 2011
}
• A. Lücking, S. Ptock, and K. Bergmann, “Staccato: Segmentation Agreement Calculator,” in Gesture in Embodied Communication and Human-Computer Interaction. Proceedings of the 9th International Gesture Workshop, Athens, Greece, 2011, pp. 50-53.
[BibTeX]

@InProceedings{Luecking:Ptock:Bergmann:2011,
Author         = {Lücking, Andy and Ptock, Sebastian and Bergmann,
Kirsten},
Title          = {Staccato: Segmentation Agreement Calculator},
BookTitle      = {Gesture in Embodied Communication and Human-Computer
Interaction. Proceedings of the 9th International
Gesture Workshop},
Editor         = {Eleni Efthimiou and Georgios Kouroupetroglou},
Series         = {GW 2011},
Pages          = {50--53},
Publisher      = {National and Kapodistrian University of Athens},
month          = {5},
year           = 2011
}
• A. Mehler and A. Lücking, “A Graph Model of Alignment in Multilog,” in Proceedings of IEEE Africon 2011, Zambia, 2011.
[BibTeX]

@InProceedings{Mehler:Luecking:2011,
Author         = {Mehler, Alexander and Lücking, Andy},
Title          = {A Graph Model of Alignment in Multilog},
BookTitle      = {Proceedings of IEEE Africon 2011},
Series         = {IEEE Africon},
Organization   = {IEEE},
month          = {9},
website        = {https://www.researchgate.net/publication/267941012_A_Graph_Model_of_Alignment_in_Multilog},
year           = 2011
}
• A. Mehler, A. Lücking, and P. Menke, “From Neural Activation to Symbolic Alignment: A Network-Based Approach to the Formation of Dialogue Lexica,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN 2011), San Jose, California, July 31 — August 5, 2011.
[BibTeX]

@InProceedings{Mehler:Luecking:Menke:2011,
Author         = {Mehler, Alexander and Lücking, Andy and Menke, Peter},
Title          = {From Neural Activation to Symbolic Alignment: A
Network-Based Approach to the Formation of Dialogue
Lexica},
BookTitle      = {Proceedings of the International Joint Conference on
Neural Networks (IJCNN 2011), San Jose, California,
July 31 -- August 5},
website        = {{http://dx.doi.org/10.1109/IJCNN.2011.6033266}},
year           = 2011
}
• A. Lücking, O. Abramov, A. Mehler, and P. Menke, “The Bielefeld Jigsaw Map Game (JMG) Corpus,” in Abstracts of the Corpus Linguistics Conference 2011, Birmingham, 2011.
[BibTeX]

@InProceedings{Luecking:Abramov:Mehler:Menke:2011,
Author         = {Lücking, Andy and Abramov, Olga and Mehler, Alexander
and Menke, Peter},
Title          = {The Bielefeld Jigsaw Map Game (JMG) Corpus},
BookTitle      = {Abstracts of the Corpus Linguistics Conference 2011},
Series         = {CL2011},
pdf            = {http://www.birmingham.ac.uk/documents/college-artslaw/corpus/conference-archives/2011/Paper-137.pdf},
website        = {http://www.birmingham.ac.uk/research/activity/corpus/publications/conference-archives/2011-birmingham.aspx},
year           = 2011
}
• A. Mehler, A. Lücking, and P. Menke, “Assessing Lexical Alignment in Spontaneous Direction Dialogue Data by Means of a Lexicon Network Model,” in Proceedings of the 12th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing), February 20–26, Tokyo, Berlin/New York, 2011, pp. 368-379.
[BibTeX]

@InProceedings{Mehler:Luecking:Menke:2011:a,
Author         = {Mehler, Alexander and Lücking, Andy and Menke, Peter},
Title          = {Assessing Lexical Alignment in Spontaneous Direction
Dialogue Data by Means of a Lexicon Network Model},
BookTitle      = {Proceedings of 12th International Conference on
Intelligent Text Processing and Computational
Linguistics (CICLing), February 20--26, Tokyo},
Series         = {CICLing'11},
Pages          = {368-379},
Publisher      = {Springer},
year           = 2011
}

### 2010 (5)

• A. Mehler, A. Lücking, and P. Weiß, “A Network Model of Interpersonal Alignment,” Entropy, vol. 12, iss. 6, pp. 1440-1483, 2010.
[Abstract] [BibTeX]

In dyadic communication, both interlocutors adapt to each other linguistically, that is, they align interpersonally. In this article, we develop a framework for modeling interpersonal alignment in terms of the structural similarity of the interlocutors’ dialog lexica. This is done by means of so-called two-layer time-aligned network series, that is, a time-adjusted graph model. The graph model is partitioned into two layers, so that the interlocutors’ lexica are captured as subgraphs of an encompassing dialog graph. Each constituent network of the series is updated utterance-wise. Thus, both the inherent bipartition of dyadic conversations and their gradual development are modeled. The notion of alignment is then operationalized within a quantitative model of structure formation based on the mutual information of the subgraphs that represent the interlocutor’s dialog lexica. By adapting and further developing several models of complex network theory, we show that dialog lexica evolve as a novel class of graphs that have not been considered before in the area of complex (linguistic) networks. Additionally, we show that our framework allows for classifying dialogs according to their alignment status. To the best of our knowledge, this is the first approach to measuring alignment in communication that explores the similarities of graph-like cognitive representations.
@Article{Mehler:Weiss:Luecking:2010:a,
Author         = {Mehler, Alexander and Lücking, Andy and Wei{\ss},
Petra},
Title          = {A Network Model of Interpersonal Alignment},
Journal        = {Entropy},
Volume         = {12},
Number         = {6},
Pages          = {1440-1483},
abstract       = {In dyadic communication, both interlocutors adapt to
each other linguistically, that is, they align
interpersonally. In this article, we develop a
framework for modeling interpersonal alignment in terms
of the structural similarity of the interlocutors’
dialog lexica. This is done by means of so-called
two-layer time-aligned network series, that is, a
time-adjusted graph model. The graph model is
partitioned into two layers, so that the
interlocutors’ lexica are captured as subgraphs of an
encompassing dialog graph. Each constituent network of
the series is updated utterance-wise. Thus, both the
inherent bipartition of dyadic conversations and their
gradual development are modeled. The notion of
alignment is then operationalized within a quantitative
model of structure formation based on the mutual
information of the subgraphs that represent the
interlocutor’s dialog lexica. By adapting and further
developing several models of complex network theory, we
show that dialog lexica evolve as a novel class of
graphs that have not been considered before in the area
of complex (linguistic) networks. Additionally, we show
that our framework allows for classifying dialogs
according to their alignment status. To the best of our
knowledge, this is the first approach to measuring
alignment in communication that explores the
similarities of graph-like cognitive representations.},
doi            = {10.3390/e12061440},
pdf            = {http://www.mdpi.com/1099-4300/12/6/1440/pdf},
website        = {http://www.mdpi.com/1099-4300/12/6/1440/},
year           = 2010
}
• A. Lücking and K. Bergmann, Introducing the Bielefeld SaGA Corpus, talk given at the 4th Conference of the International Society for Gesture Studies (ISGS), Europa Universität Viadrina Frankfurt/Oder, 2010.
[Abstract] [BibTeX]

People communicate multimodally. Most prominently, they co-produce speech and gesture. How do they do that? Studying the interplay of both modalities has to be informed by empirically observed communication behavior. We present a corpus built of speech and gesture data gained in a controlled study. We describe 1) the setting underlying the data; 2) annotation of the data; 3) reliability evaluation methods and results; and 4) applications of the corpus in the research domain of speech and gesture alignment.
@Misc{Luecking:Bergmann:2010,
Author         = {Andy L\"{u}cking and Kirsten Bergmann},
Title          = {Introducing the {B}ielefeld {SaGA} Corpus},
HowPublished   = {Talk given at \textit{Gesture: Evolution, Brain, and
Linguistic Structures.} 4th Conference of the
International Society for Gesture Studies (ISGS).},
abstract       = {People communicate multimodally. Most prominently,
they co-produce speech and gesture. How do they do
that? Studying the interplay of both modalities has to
be informed by empirically observed communication
behavior. We present a corpus built of speech and
gesture data gained in a controlled study. We describe
1) the setting underlying the data; 2) annotation of
the data; 3) reliability evaluation methods and results;
and 4) applications of the corpus in the research
domain of speech and gesture alignment.},
day            = {28},
month          = {07},
year           = 2010
}
• A. Lücking, “A Semantic Account for Iconic Gestures,” in Gesture: Evolution, Brain, and Linguistic Structures, Europa Universität Viadrina Frankfurt/Oder, 2010, p. 210.
[BibTeX]

@InProceedings{Luecking:2010,
Author         = {Lücking, Andy},
Title          = {A Semantic Account for Iconic Gestures},
BookTitle      = {Gesture: Evolution, Brain, and Linguistic Structures},
Pages          = {210},
Organization   = {4th Conference of the International Society for
Gesture Studies (ISGS)},
keywords       = {own},
month          = {7},
website        = {http://pub.uni-bielefeld.de/publication/2318565},
year           = 2010
}
• A. Lücking, K. Bergmann, F. Hahn, S. Kopp, and H. Rieser, “The Bielefeld Speech and Gesture Alignment Corpus (SaGA),” in Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality, Malta, 2010, pp. 92-98.
[Abstract] [BibTeX]

People communicate multimodally. Most prominently, they co-produce speech and gesture. How do they do that? Studying the interplay of both modalities has to be informed by empirically observed communication behavior. We present a corpus built of speech and gesture data gained in a controlled study. We describe 1) the setting underlying the data; 2) annotation of the data; 3) reliability evaluation methods and results; and 4) applications of the corpus in the research domain of speech and gesture alignment.
@InProceedings{Luecking:et:al:2010,
Author         = {Lücking, Andy and Bergmann, Kirsten and Hahn, Florian
and Kopp, Stefan and Rieser, Hannes},
Title          = {The Bielefeld Speech and Gesture Alignment Corpus
(SaGA)},
BookTitle      = {Multimodal Corpora: Advances in Capturing, Coding and
Analyzing Multimodality},
Pages          = {92--98},
Organization   = {7th International Conference for Language Resources
and Evaluation (LREC 2010)},
abstract       = {People communicate multimodally. Most prominently,
they co-produce speech and gesture. How do they do
that? Studying the interplay of both modalities has to
be informed by empirically observed communication
behavior. We present a corpus built of speech and
gesture data gained in a controlled study. We describe
1) the setting underlying the data; 2) annotation of
the data; 3) reliability evaluation methods and results;
and 4) applications of the corpus in the research
domain of speech and gesture alignment.},
keywords       = {own},
month          = {5},
website        = {http://pub.uni-bielefeld.de/publication/2001935},
year           = 2010
}
• A. Mehler, P. Weiß, P. Menke, and A. Lücking, “Towards a Simulation Model of Dialogical Alignment,” in Proceedings of the 8th International Conference on the Evolution of Language (Evolang8), 14-17 April 2010, Utrecht, 2010, pp. 238-245.
[BibTeX]

@InProceedings{Mehler:Weiss:Menke:Luecking:2010,
Author         = {Mehler, Alexander and Wei{\ss}, Petra and Menke, Peter
and Lücking, Andy},
Title          = {Towards a Simulation Model of Dialogical Alignment},
BookTitle      = {Proceedings of the 8th International Conference on the
Evolution of Language (Evolang8), 14-17 April 2010,
Utrecht},
Pages          = {238-245},
website        = {http://www.let.uu.nl/evolang2010.nl/},
year           = 2010
}

### 2009 (1)

• A. Mehler and A. Lücking, “A Structural Model of Semiotic Alignment: The Classification of Multimodal Ensembles as a Novel Machine Learning Task,” in Proceedings of IEEE Africon 2009, September 23-25, Nairobi, Kenya, 2009.
[Abstract] [BibTeX]

In addition to the well-known linguistic alignment processes in dyadic communication – e.g., phonetic, syntactic, semantic alignment – we provide evidence for a genuine multimodal alignment process, namely semiotic alignment. Communicative elements from different modalities 'routinize into' cross-modal 'super-signs', which we call multimodal ensembles. Computational models of human communication are in need of expressive models of multimodal ensembles. In this paper, we exemplify semiotic alignment by means of empirical examples of the building of multimodal ensembles. We then propose a graph model of multimodal dialogue that is expressive enough to capture multimodal ensembles. In line with this model, we define a novel task in machine learning with the aim of training classifiers that can detect semiotic alignment in dialogue. This model is in support of approaches which need to gain insights into realistic human-machine communication.
@InProceedings{Mehler:Luecking:2009,
Author         = {Mehler, Alexander and Lücking, Andy},
Title          = {A Structural Model of Semiotic Alignment: The
Classification of Multimodal Ensembles as a Novel
Machine Learning Task},
BookTitle      = {Proceedings of IEEE Africon 2009, September 23-25,
Nairobi, Kenya},
Publisher      = {IEEE},
abstract       = {In addition to the well-known linguistic alignment
processes in dyadic communication – e.g., phonetic,
syntactic, semantic alignment – we provide evidence
for a genuine multimodal alignment process, namely
semiotic alignment. Communicative elements from
different modalities 'routinize into' cross-modal
'super-signs', which we call multimodal ensembles.
Computational models of human communication are in need
of expressive models of multimodal ensembles. In this
paper, we exemplify semiotic alignment by means of
empirical examples of the building of multimodal
ensembles. We then propose a graph model of multimodal
dialogue that is expressive enough to capture
multimodal ensembles. In line with this model, we
define a novel task in machine learning with the aim of
training classifiers that can detect semiotic alignment
in dialogue. This model is in support of approaches
which need to gain insights into realistic
human-machine communication.},
year           = 2009
}

### 2008 (1)

• A. Lücking, A. Mehler, and P. Menke, “Taking Fingerprints of Speech-and-Gesture Ensembles: Approaching Empirical Evidence of Intrapersonal Alignment in Multimodal Communication,” in LONDIAL 2008: Proceedings of the 12th Workshop on the Semantics and Pragmatics of Dialogue (SEMDIAL), King’s College London, 2008, pp. 157–164.
[BibTeX]

@InProceedings{Luecking:Mehler:Menke:2008,
Author         = {Lücking, Andy and Mehler, Alexander and Menke, Peter},
Title          = {Taking Fingerprints of Speech-and-Gesture Ensembles:
Approaching Empirical Evidence of Intrapersonal
Alignment in Multimodal Communication},
BookTitle      = {LONDIAL 2008: Proceedings of the 12th Workshop on the
Semantics and Pragmatics of Dialogue (SEMDIAL)},
Pages          = {157--164},
month          = {June 2–4},
website        = {https://www.researchgate.net/publication/237305375_Taking_Fingerprints_of_Speech-and-Gesture_Ensembles_Approaching_Empirical_Evidence_of_Intrapersonal_Alignment_in_Multimodal_Communication},
year           = 2008
}

### 2007 (2)

• C. Borr, M. Hielscher-Fastabend, and A. Lücking, “Reliability and Validity of Cervical Auscultation,” Dysphagia, vol. 22, pp. 225-234, 2007.
[Abstract] [BibTeX]

We conducted a two-part study that contributes to the discussion about cervical auscultation (CA) as a scientifically justifiable and medically useful tool to identify patients with a high risk of aspiration/penetration. We sought to determine (1) acoustic features that mark a deglutition act as dysphagic; (2) acoustic changes in healthy older deglutition profiles compared with those of younger adults; (3) the correctness and concordance of rater judgments based on CA; and (4) if education in CA improves individual reliability. The first part of the study focused on a comparison of the swallow morphology of dysphagic as opposed to healthy subjects' deglutition in terms of structure properties of the pharyngeal phase of deglutition. We obtained the following results. The duration of deglutition apnea is significantly higher in the older group than in the younger one. Comparing the younger group and the dysphagic group we found significant differences in duration of deglutition apnea, onset time, and number of gulps. Just one parameter, number of gulps, distinguishes significantly between the older and the dysphagic groups. The second part of the study aimed at evaluating the reliability of CA in detecting dysphagia measured as the concordance and the correctness of CA experts in classifying swallowing sounds. The interrater reliability coefficient AC1 resulted in a value of 0.46, which is to be interpreted as fair agreement. Furthermore, we found that comparison with radiologically defined aspiration/penetration for the group of experts (speech and language therapists) yielded 70% specificity and 94% sensitivity. We conclude that the swallowing sounds contain audible cues that should, in principle, permit reliable classification and view CA as an early warning system for identifying patients with a high risk of aspiration/penetration; however, it is not appropriate as a stand-alone tool.
@Article{Borr:Luecking:Hierlscher:2007,
Author         = {Borr, Christiane and Hielscher-Fastabend, Martina and
Lücking, Andy},
Title          = {Reliability and Validity of Cervical Auscultation},
Journal        = {Dysphagia},
Volume         = {22},
Pages          = {225--234},
abstract       = {We conducted a two-part study that contributes to the
discussion about cervical auscultation (CA) as a
scientifically justifiable and medically useful tool to
identify patients with a high risk of
aspiration/penetration. We sought to determine (1)
acoustic features that mark a deglutition act as
dysphagic; (2) acoustic changes in healthy older
deglutition profiles compared with those of younger
adults; (3) the correctness and concordance of rater
judgments based on CA; and (4) if education in CA
improves individual reliability. The first part of the
study focused on a comparison of the swallow morphology
of dysphagic as opposed to healthy subjects'
deglutition in terms of structure properties of the
pharyngeal phase of deglutition. We obtained the
following results. The duration of deglutition apnea is
significantly higher in the older group than in the
younger one. Comparing the younger group and the
dysphagic group we found significant differences in
duration of deglutition apnea, onset time, and number
of gulps. Just one parameter, number of gulps,
distinguishes significantly between the older and the
dysphagic groups. The second part of the study aimed at
evaluating the reliability of CA in detecting dysphagia
measured as the concordance and the correctness of CA
experts in classifying swallowing sounds. The
interrater reliability coefficient AC1 resulted in a
value of 0.46, which is to be interpreted as fair
agreement. Furthermore, we found that comparison with
radiologically defined aspiration/penetration for the
group of experts (speech and language therapists)
yielded 70% specificity and 94% sensitivity. We
conclude that the swallowing sounds contain audible
cues that should, in principle, permit reliable
classification and view CA as an early warning system
for identifying patients with a high risk of
aspiration/penetration; however, it is not appropriate
as a stand-alone tool.},
doi            = {10.1007/s00455-007-9078-3},
issue          = {3},
pdf            = {http://www.shkim.eu/cborr/ca5manuscript.pdf},
publisher      = {Springer New York},
url            = {http://dx.doi.org/10.1007/s00455-007-9078-3},
year           = 2007
}
• A. Kranstedt, A. Lücking, T. Pfeiffer, H. Rieser, and M. Staudacher, Locating Objects by Pointing, 2007.
[BibTeX]

@Misc{Kranstedt:et:al:2007,
Author         = {Kranstedt, Alfred and Lücking, Andy and Pfeiffer,
Thies and Rieser, Hannes and Staudacher, Marc},
Title          = {Locating Objects by Pointing},
HowPublished   = {3rd International Conference of the International
Society for Gesture Studies. Evanston, IL, USA},
keywords       = {own},
month          = {6},
year           = 2007
}

### 2006 (6)

• A. Kranstedt, A. Lücking, T. Pfeiffer, H. Rieser, and M. Staudacher, “Measuring and Reconstructing Pointing in Visual Contexts,” in brandial ’06 — Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue, Potsdam, 2006, pp. 82-89.
[Abstract] [BibTeX]

We describe an experiment to gather original data on geometrical aspects of pointing. In particular, we are focusing upon the concept of the pointing cone, a geometrical model of a pointing’s extension. In our setting we employed methodological and technical procedures of a new type to integrate data from annotations as well as from tracker recordings. We combined exact information on position and orientation with rater’s classifications. Our first results seem to challenge classical linguistic and philosophical theories of demonstration in that they advise to separate pointings from reference.
@InProceedings{Kranstedt:et:al:2006:c,
Author         = {Kranstedt, Alfred and Lücking, Andy and Pfeiffer,
Thies and Rieser, Hannes and Staudacher, Marc},
Title          = {Measuring and Reconstructing Pointing in Visual
Contexts},
BookTitle      = {brandial '06 -- Proceedings of the 10th Workshop on
the Semantics and Pragmatics of Dialogue},
Editor         = {David Schlangen and Raquel Fernández},
Pages          = {82--89},
Publisher      = {Universit{\"a}tsverlag Potsdam},
abstract       = {We describe an experiment to gather original data on
geometrical aspects of pointing. In particular, we are
focusing upon the concept of the pointing cone, a
geometrical model of a pointing’s extension. In our
setting we employed methodological and technical
procedures of a new type to integrate data from
annotations as well as from tracker recordings. We
combined exact information on position and orientation
with rater’s classifications. Our first results seem
to challenge classical linguistic and philosophical
theories of demonstration in that they advise to
separate pointings from reference.},
keywords       = {own},
month          = {9},
website        = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.144.8472},
year           = 2006
}
• A. Lücking, H. Rieser, and M. Staudacher, “Multi-modal Integration for Gesture and Speech,” in brandial ’06 — Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue, Potsdam, 2006, pp. 106-113.
[Abstract] [BibTeX]

Demonstratives, in particular gestures that 'only' accompany speech, are not a big issue in current theories of grammar. If we deal with gestures, fixing their function is one big problem, the other one is how to integrate the representations originating from different channels and, ultimately, how to determine their composite meanings. The growing interest in multi-modal settings, computer simulations, human-machine interfaces and VR-applications increases the need for theories of multi-modal structures and events. In our workshop-contribution we focus on the integration of multi-modal contents and investigate different approaches dealing with this problem such as Johnston et al. (1997) and Johnston (1998), Johnston and Bangalore (2000), Chierchia (1995), Asher (2005), and Rieser (2005).
@InProceedings{Luecking:Rieser:Staudacher:2006:a,
Author         = {Lücking, Andy and Rieser, Hannes and Staudacher, Marc},
Title          = {Multi-modal Integration for Gesture and Speech},
BookTitle      = {brandial '06 -- Proceedings of the 10th Workshop on
the Semantics and Pragmatics of Dialogue},
Editor         = {David Schlangen and Raquel Fernández},
Pages          = {106--113},
Publisher      = {Universit{\"a}tsverlag Potsdam},
abstract       = {Demonstratives, in particular gestures that 'only'
accompany speech, are not a big issue in current
theories of grammar. If we deal with gestures, fixing
their function is one big problem, the other one is how
to integrate the representations originating from
different channels and, ultimately, how to determine
their composite meanings. The growing interest in
multi-modal settings, computer simulations,
human-machine interfaces and VR-applications increases
the need for theories of multi-modal structures and
events. In our workshop-contribution we focus on the
integration of multi-modal contents and investigate
different approaches dealing with this problem such as
Johnston et al. (1997) and Johnston (1998), Johnston
and Bangalore (2000), Chierchia (1995), Asher (2005),
and Rieser (2005).},
keywords       = {own},
month          = {9},
year           = 2006
}
• A. Kranstedt, A. Lücking, T. Pfeiffer, H. Rieser, and I. Wachsmuth, “Deictic Object Reference in Task-oriented Dialogue,” in Situated Communication, G. Rickheit and I. Wachsmuth, Eds., Berlin: De Gruyter Mouton, 2006, pp. 155-207.
[Abstract] [BibTeX]

This chapter presents an original approach towards a detailed understanding of the usage of pointing gestures accompanying referring expressions. This effort is undertaken in the context of human-machine interaction integrating empirical studies, theory of grammar and logics, and simulation techniques. In particular, we take steps to classify the role of pointing in deictic expressions and to model the focussed area of pointing gestures, the so-called pointing cone. This pointing cone serves as a central concept in a formal account of multi-modal integration at the linguistic speech-gesture interface as well as in a computational model of processing multi-modal deictic expressions.
@InCollection{Kranstedt:et:al:2006:b,
Author         = {Kranstedt, Alfred and Lücking, Andy and Pfeiffer,
Thies and Rieser, Hannes and Wachsmuth, Ipke},
Title          = {Deictic Object Reference in Task-oriented Dialogue},
BookTitle      = {Situated Communication},
Publisher      = {De Gruyter Mouton},
Editor         = {Gert Rickheit and Ipke Wachsmuth},
Pages          = {155--207},
abstract       = {This chapter presents an original approach towards a
detailed understanding of the usage of pointing
gestures accompanying referring expressions. This
effort is undertaken in the context of human-machine
interaction integrating empirical studies, theory of
grammar and logics, and simulation techniques. In
particular, we take steps to classify the role of
pointing in deictic expressions and to model the
focussed area of pointing gestures, the so-called
pointing cone. This pointing cone serves as a central
concept in a formal account of multi-modal integration
at the linguistic speech-gesture interface as well as
in a computational model of processing multi-modal
deictic expressions.},
keywords       = {own},
website        = {http://pub.uni-bielefeld.de/publication/1894485},
year           = 2006
}
• A. Kranstedt, A. Lücking, T. Pfeiffer, H. Rieser, and I. Wachsmuth, “Deixis: How to Determine Demonstrated Objects Using a Pointing Cone,” in Gesture in Human-Computer Interaction and Simulation, S. Gibet, N. Courty, and J. Kamp, Eds., Berlin: Springer, 2006, pp. 300-311.
[Abstract] [BibTeX]

We present a collaborative approach towards a detailed understanding of the usage of pointing gestures accompanying referring expressions. This effort is undertaken in the context of human-machine interaction integrating empirical studies, theory of grammar and logics, and simulation techniques. In particular, we attempt to measure the precision of the focussed area of a pointing gesture, the so-called pointing cone. The pointing cone serves as a central concept in a formal account of multi-modal integration at the linguistic speech-gesture interface as well as in a computational model of processing multi-modal deictic expressions.
@InCollection{Kranstedt:et:al:2006:a,
Author         = {Kranstedt, Alfred and Lücking, Andy and Pfeiffer,
Thies and Rieser, Hannes and Wachsmuth, Ipke},
Title          = {Deixis: How to Determine Demonstrated Objects Using a
Pointing Cone},
BookTitle      = {Gesture in Human-Computer Interaction and Simulation},
Publisher      = {Springer},
Editor         = {Sylvie Gibet and Nicolas Courty and Jean-Francois Kamp},
Pages          = {300--311},
abstract       = {We present a collaborative approach towards a detailed
understanding of the usage of pointing gestures
accompanying referring expressions. This effort is
undertaken in the context of human-machine interaction
integrating empirical studies, theory of grammar and
logics, and simulation techniques. In particular, we
attempt to measure the precision of the focussed area
of a pointing gesture, the so-called pointing cone. The
pointing cone serves as a central concept in a formal
account of multi-modal integration at the linguistic
speech-gesture interface as well as in a computational
model of processing multi-modal deictic expressions.},
anote          = {6th International Gesture Workshop, Berder Island,
France, 2005, Revised Selected Papers},
keywords       = {own},
year           = 2006
}
• T. Pfeiffer, A. Kranstedt, and A. Lücking, “Sprach-Gestik Experimente mit IADE, dem Interactive Augmented Data Explorer,” in Proceedings: Dritter Workshop Virtuelle und Erweiterte Realität der GI-Fachgruppe VR/AR, Koblenz, 2006.
[Abstract] [BibTeX]

Empirical research on natural human communication depends on the acquisition and analysis of extensive data. The modalities through which humans can express themselves differ widely, and so do the representations by which these modalities can be made available to empirical study. For an investigation of pointing behaviour in object reference, we developed IADE, a framework for recording, analysing, and re-simulating speech and gesture data. It enables decisive advances in linguistic experimental methodology for our research.
@InProceedings{Pfeiffer:Kranstedt:Luecking:2006,
Author         = {Pfeiffer, Thies and Kranstedt, Alfred and Lücking,
Andy},
Title          = {Sprach-Gestik Experimente mit IADE, dem Interactive
Augmented Data Explorer},
BookTitle      = {Proceedings: Dritter Workshop Virtuelle und Erweiterte
Realit{\"a}t der GI-Fachgruppe VR/AR},
abstract       = {Für die empirische Erforschung natürlicher
menschlicher Kommunikation sind wir auf die Akquise und
Auswertung umfangreicher Daten angewiesen. Die
Modalit{\"a}ten, über die sich Menschen ausdrücken
können, sind sehr unterschiedlich - und genauso
verschieden sind die Repr{\"a}sentationen, mit denen
sie für die Empirie verfügbar gemacht werden können.
Für eine Untersuchung des Zeigeverhaltens bei der
Referenzierung von Objekten haben wir mit IADE ein
Framework für die Aufzeichnung, Analyse und
Resimulation von Sprach-Gestik Daten entwickelt. Mit
dessen Hilfe können wir für unsere Forschung
entscheidende Fortschritte in der linguistischen
Experimentalmethodik machen.},
keywords       = {own},
website        = {http://pub.uni-bielefeld.de/publication/2426853},
year           = 2006
}
• A. Lücking, H. Rieser, and M. Staudacher, “SDRT and Multi-modal Situated Communication,” in brandial ’06 — Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue, 2006, pp. 72-79.
[BibTeX]

@InProceedings{Luecking:Rieser:Staudacher:2006:b,
Author         = {Lücking, Andy and Rieser, Hannes and Staudacher, Marc},
Title          = {SDRT and Multi-modal Situated Communication},
BookTitle      = {brandial '06 -- Proceedings of the 10th Workshop on
the Semantics and Pragmatics of Dialogue},
Editor         = {David Schlangen and Raquel Fernández},
Pages          = {72--79},
Publisher      = {Universit{\"a}tsverlag Potsdam},
keywords       = {own},
month          = {9},
year           = 2006
}

### 2005 (2)

• A. Lücking and J. Stegmann, “Assessing Reliability on Annotations (2): Statistical Results for the DeiKon Scheme,” SFB 360, Universität Bielefeld, 3, 2005.
[BibTeX]

@TechReport{Luecking:Stegmann:2005,
author =       {Andy L\"{u}cking and Jens Stegmann},
title =        {Assessing Reliability on Annotations (2):
Statistical Results for the \textsc{DeiKon} Scheme},
institution =  {SFB 360},
year =         2005,
number =       3,
url =
{http://www.sfb360.uni-bielefeld.de/reports/2005/2005-03.html}
}
• J. Stegmann and A. Lücking, “Assessing Reliability on Annotations (1): Theoretical Considerations,” SFB 360, Universität Bielefeld, 2, 2005.
[BibTeX]

@TechReport{Stegmann:Luecking:2005,
author =       {Jens Stegmann and Andy L\"{u}cking},
title =        {Assessing Reliability on Annotations (1):
Theoretical Considerations},
institution =  {SFB 360},
year =         2005,
number =       2,
url =
{http://www.sfb360.uni-bielefeld.de/reports/2005/2005-02.html}
}

### 2004 (1)

• A. Lücking, H. Rieser, and J. Stegmann, “Statistical Support for the Study of Structures in Multi-Modal Dialogue: Inter-Rater Agreement and Synchronization,” in Catalog ’04—Proceedings of the Eighth Workshop on the Semantics and Pragmatics of Dialogue, Barcelona, 2004, pp. 56-63.
[Abstract] [BibTeX]

We present a statistical approach to assess relations that hold among speech and pointing gestures in and between turns in task-oriented dialogue. The units quantified over are the time-stamps of the XML-based annotation of the digital video data. It was found that, on average, gesture strokes do not exceed, but are freely distributed over the time span of their linguistic affiliates. Further, the onset of the affiliate was observed to occur earlier than gesture initiation. Moreover, we found that gestures do obey certain appropriateness conditions and contribute semantic content ('gestures save words') as well. Gestures also seem to play a functional role wrt dialogue structure: There is evidence that gestures can contribute to the bundle of features making up a turn-taking signal. Some statistical results support a partitioning of the domain, which is also reflected in certain rating difficulties. However, our evaluation of the applied annotation scheme generally resulted in very good agreement.
@InProceedings{Luecking:Rieser:Stegmann:2004,
Author         = {Lücking, Andy and Rieser, Hannes and Stegmann, Jens},
Title          = {Statistical Support for the Study of Structures in
Multi-Modal Dialogue: Inter-Rater Agreement and
Synchronization},
BookTitle      = {Catalog '04---Proceedings of the Eighth Workshop on
the Semantics and Pragmatics of Dialogue},
Editor         = {Jonathan Ginzburg and Enric Vallduví},
Pages          = {56--63},
Organization   = {Department of Translation and Philology, Universitat
Pompeu Fabra},
abstract       = {We present a statistical approach to assess relations
that hold among speech and pointing gestures in and
between turns in task-oriented dialogue. The units
quantified over are the time-stamps of the XML-based
annotation of the digital video data. It was found
that, on average, gesture strokes do not exceed, but
are freely distributed over the time span of their
linguistic affiliates. Further, the onset of the
affiliate was observed to occur earlier than gesture
initiation. Moreover, we found that gestures do obey
certain appropriateness conditions and contribute
semantic content ('gestures save words') as well.
Gestures also seem to play a functional role wrt
dialogue structure: There is evidence that gestures can
contribute to the bundle of features making up a
turn-taking signal. Some statistical results support a
partitioning of the domain, which is also reflected in
certain rating difficulties. However, our evaluation of
the applied annotation scheme generally resulted in
very good agreement},
keywords       = {own},
year           = 2004
}
• V. Ries and A. Lücking, in Multilingual Resources and Multilingual Applications: Proceedings of the German Society for Computational Linguistics 2011, 2011, pp. 207-210.
[Poster] [BibTeX]

@InProceedings{Ries:Luecking:2011,
Author         = {Ries, Veronika and Lücking, Andy},
BookTitle      = {Multilingual Resources and Multilingual Applications:
Proceedings of the German Society for Computational
Linguistics 2011},
Pages          = {207--210},
year           = 2011
}

Since April 2019 I have been a (part-time) postdoctoral research fellow at the Laboratoire de Linguistique Formelle (LLF) at the Université de Paris (formerly the Université Paris Diderot (Paris 7)). The fellowship centres on my habilitation, mainly on a semantic theory of quantified noun phrases which (unlike generalised quantifier theory) accounts for their clarification, anaphoric, gesture-integration, and negation potential (expected 2020).

Since January 2020 I have been a (part-time) research assistant in the BIOfid project.

From February 2019 until January 2020 I was a (part-time) research assistant in the PLATO project.

In January 2011, I started to work as a research assistant at the Text Technology Lab at the Goethe University Frankfurt.

I studied linguistics, philosophy, and German philology at Bielefeld University. During my studies, I worked as a research assistant in several projects:

1. B1 “Speech-Gesture Alignment” in the Collaborative Research Center 673 “Alignment in Communication” (June 2006 to January 2011). In this project, I contributed to building the Speech-and-Gesture Alignment Corpus (SaGA). I also developed an account of the meaning of co-verbal iconic gestures and of how they interact with speech (see my dissertation).
2. Linguistic Networks (September 2009 to December 2010).
3. Research Unit 437 “Text Technological Modelling of Information”, project A2 Secondary structuring of information and comparative analysis of discourse (Sekimo) (April 2006 to September 2006). In this short-term engagement, I supported the annotation of discourse structure and centering relations, and the assessment of reliability.
4. Project B3 “Deixis in Construction Dialogues” of the Collaborative Research Center 360 “Situated Artificial Communicators” (2005). In this project, I participated in investigating the role of pointing in demonstrative reference in task-oriented dialogue.

In 2011, I received my PhD in linguistics from Bielefeld University for prolegomena to a linguistic theory of co-verbal iconic gesture. The work was published in 2013 as “Ikonische Gesten. Grundzüge einer linguistischen Theorie” (Iconic Gestures: Outline of a Linguistic Theory).

I am a member of the Deutsche Gesellschaft für Sprachwissenschaft (DGfS).

Besides my research activities, I am interested in typesetting with LaTeX. I am a member of the German TeX User Society (Deutsche Anwendervereinigung TeX, DANTE).