VAnnotatoR


VAnnotatoR is a system that supports the annotation of multimedia, multimodal networks of linguistic and object-related data in virtual 3D environments.

VAnnotatoR allows for segmenting and linking texts and images with other multimedia, multimodal data to ultimately create spatial hypertexts. It can be used to support concrete technical applications, e.g. the reconstruction of 3D scenes from texts as a preparatory step for training Text2Scene systems, or applications in the digital humanities, such as the spatial reconstruction and linking of historical scenes or biographies with corresponding media content, e.g. images or 3D reconstructions of historical buildings. The entire system is interactively accessible in Virtual Reality (VR) via VR headsets and can be controlled and experienced accordingly. VAnnotatoR uses TextAnnotator as its technological backbone and thus inherits all of its functions for annotation, task management and annotation evaluation, as well as for the collaborative and simultaneous processing of resources. Furthermore, VAnnotatoR uses TextImager to benefit from its NLP routines for automatic text analysis. VAnnotatoR is implemented on the basis of Unity3D and OpenVR and therefore runs on a range of VR headsets: it has been tested on Oculus Rift, Oculus Rift S, Oculus Quest, HTC Vive and HTC Vive Cosmos under Windows 10.

Left: Abrami, Spiekermann, and Mehler (2019); Right: Abrami et al. (2020)


Total: 9

2020 (4)

  • [PDF] [https://doi.org/10.1145/3372923.3404791] [DOI] G. Abrami, A. Henlein, A. Kett, and A. Mehler, “Text2SceneVR: Generating Hypertexts with VAnnotatoR as a Pre-processing Step for Text2Scene Systems,” in Proceedings of the 31st ACM Conference on Hypertext and Social Media, New York, NY, USA, 2020, pp. 177–186.
    [BibTeX]

    @InProceedings{Abrami:Henlein:Kett:Mehler:2020,
        author = {Abrami, Giuseppe and Henlein, Alexander and Kett, Attila and Mehler, Alexander},
        title = {{Text2SceneVR}: Generating Hypertexts with VAnnotatoR as a Pre-processing Step for Text2Scene Systems},
        booktitle = {Proceedings of the 31st ACM Conference on Hypertext and Social Media},
        series = {HT ’20}, 
        year = {2020},
        location = {Virtual Event, USA}, 
        isbn = {9781450370981},
        publisher = {Association for Computing Machinery},
        address = {New York, NY, USA},
        url = {https://doi.org/10.1145/3372923.3404791},
        doi = {10.1145/3372923.3404791},
        pages = {177--186},
        numpages = {10},
        pdf={https://dl.acm.org/doi/pdf/10.1145/3372923.3404791}
    }
  • [PDF] [https://www.aclweb.org/anthology/2020.isa-1.4] A. Henlein, G. Abrami, A. Kett, and A. Mehler, “Transfer of ISOSpace into a 3D Environment for Annotations and Applications,” in Proceedings of the 16th Joint ACL – ISO Workshop on Interoperable Semantic Annotation, Marseille, 2020, pp. 32-35.
    [Abstract] [BibTeX]

    People's visual perception is very pronounced and therefore it is usually no problem for them to describe the space around them in words. Conversely, people also have no problems imagining a concept of a described space. In recent years many efforts have been made to develop a linguistic concept for spatial and spatial-temporal relations. However, the systems have not really caught on so far, which in our opinion is due to the complex models on which they are based and the lack of available training data and automated taggers. In this paper we describe a project to support spatial annotation, which could facilitate annotation by its many functions, but also enrich it with many more information. This is to be achieved by an extension by means of a VR environment, with which spatial relations can be better visualized and connected with real objects. And we want to use the available data to develop a new state-of-the-art tagger and thus lay the foundation for future systems such as improved text understanding for Text2Scene.
    @InProceedings{Henlein:et:al:2020,
      Author         = {Henlein, Alexander and Abrami, Giuseppe and Kett, Attila and Mehler, Alexander},
      Title          = {Transfer of ISOSpace into a 3D Environment for Annotations and Applications},
      booktitle      = {Proceedings of the 16th Joint ACL - ISO Workshop on Interoperable Semantic Annotation},
      month          = {May},
      year           = {2020},
      address        = {Marseille},
      publisher      = {European Language Resources Association},
      pages     = {32--35},
      abstract  = {People's visual perception is very pronounced and therefore it is usually no problem for them to describe the space around them in words. Conversely, people also have no problems imagining a concept of a described space. In recent years many efforts have been made to develop a linguistic concept for spatial and spatial-temporal relations. However, the systems have not really caught on so far, which in our opinion is due to the complex models on which they are based and the lack of available training data and automated taggers. In this paper we describe a project to support spatial annotation, which could facilitate annotation by its many functions, but also enrich it with many more information. This is to be achieved by an extension by means of a VR environment, with which spatial relations can be better visualized and connected with real objects. And we want to use the available data to develop a new state-of-the-art tagger and thus lay the foundation for future systems such as improved text understanding for Text2Scene.},
      url       = {https://www.aclweb.org/anthology/2020.isa-1.4},
      pdf      = {http://www.lrec-conf.org/proceedings/lrec2020/workshops/ISA16/pdf/2020.isa-1.4.pdf}
    }
  • [https://doi.org/10.1007/978-3-030-49695-1_20] [DOI] V. Kühn, G. Abrami, and A. Mehler, “WikNectVR: A Gesture-Based Approach for Interacting in Virtual Reality Based on WikNect and Gestural Writing,” in Virtual, Augmented and Mixed Reality. Design and Interaction – 12th International Conference, VAMR 2020, Held as Part of the 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19-24, 2020, Proceedings, Part I, 2020, pp. 299-312.
    [BibTeX]

    @inproceedings{Kuehn:Abrami:Mehler:2020,
      author    = {Vincent K{\"{u}}hn and Giuseppe Abrami and Alexander Mehler},
      editor    = {Jessie Y. C. Chen and Gino Fragomeni},
      title     = {WikNectVR: {A} Gesture-Based Approach for Interacting in Virtual Reality Based on WikNect and Gestural Writing},
      booktitle = {Virtual, Augmented and Mixed Reality. Design and Interaction - 12th International Conference, {VAMR} 2020, Held as Part of the 22nd {HCI} International Conference, {HCII} 2020, Copenhagen, Denmark, July 19-24, 2020, Proceedings, Part {I}},
      series    = {Lecture Notes in Computer Science},
      volume    = {12190},
      pages     = {299--312},
      publisher = {Springer},
      year      = {2020},
      url       = {https://doi.org/10.1007/978-3-030-49695-1_20},
      doi       = {10.1007/978-3-030-49695-1_20},
      timestamp = {Tue, 14 Jul 2020 10:55:57 +0200},
      biburl    = {https://dblp.org/rec/conf/hci/KuhnAM20.bib},
      bibsource = {dblp computer science bibliography, https://dblp.org}
    }
  • [https://www.routledge.com/New-Perspectives-on-Virtual-and-Augmented-Reality-Finding-New-Ways-to-Teach/Daniela/p/book/9780367432119] G. Abrami, A. Mehler, C. Spiekermann, A. Kett, S. Lööck, and L. Schwarz, “Educational Technologies in the area of ubiquitous historical computing in virtual reality,” in New Perspectives on Virtual and Augmented Reality: Finding New Ways to Teach in a Transformed Learning Environment, L. Daniela, Ed., Taylor & Francis, 2020.
    [Abstract] [BibTeX]

    At ever shorter intervals, new technologies are being developed that are opening up more and more areas of application. This regards, for example, Virtual Reality (VR) and Augmented Reality (AR) devices. In addition to the private sector, the public and education sectors, which already make intensive use of these devices, benefit from these technologies. However, especially in the field of historical education, there are not many frameworks for generating immersive virtual environments that can be used flexibly enough. This chapter addresses this gap by means of VAnnotatoR. VAnnotatoR is a versatile framework for the creation and use of virtual environments that serve to model historical processes in historical education. The paper describes the building blocks of VAnnotatoR and describes applications in historical education.
    @InBook{Abrami:et:al:2020,
        author="Abrami, Giuseppe and Mehler, Alexander and Spiekermann, Christian and Kett, Attila and L{\"o}{\"o}ck, Simon and Schwarz, Lukas",
        editor="Daniela, Linda",
        title="Educational Technologies in the area of ubiquitous historical computing in virtual reality",
        bookTitle="New Perspectives on Virtual and Augmented Reality: Finding New Ways to Teach in a Transformed Learning Environment",
        year="2020",
        publisher="Taylor \& Francis",
        abstract="At ever shorter intervals, new technologies are being developed that are opening up more and more areas of application. This regards, for example, Virtual Reality (VR) and Augmented Reality (AR) devices. In addition to the private sector, the public and education sectors, which already make intensive use of these devices, benefit from these technologies. However, especially in the field of historical education, there are not many frameworks for generating immersive virtual environments that can be used flexibly enough. This chapter addresses this gap by means of VAnnotatoR. VAnnotatoR is a versatile framework for the creation and use of virtual environments that serve to model historical processes in historical education. The paper describes the building blocks of VAnnotatoR and describes applications in historical education.",
        isbn={978-0-367-43211-9},
        url={https://www.routledge.com/New-Perspectives-on-Virtual-and-Augmented-Reality-Finding-New-Ways-to-Teach/Daniela/p/book/9780367432119}
    }

2019 (3)

  • A. Mehler and G. Abrami, “VAnnotatoR: A framework for the multimodal reconstruction of historical situations and spaces,” in Proceedings of the Time Machine Conference, Dresden, Germany, October 10–11, 2019.
    [Poster][BibTeX]

    @inproceedings{Mehler:Abrami:2019,
        author = {Mehler, Alexander and Abrami, Giuseppe},
        title = {{VAnnotatoR}: A framework for the multimodal reconstruction of historical situations and spaces},
        booktitle = {Proceedings of the Time Machine Conference},
        year = {2019},
        date = {October 10-11},
        address = {Dresden, Germany},
        poster={https://www.texttechnologylab.org/wp-content/uploads/2019/09/TimeMachineConference.pdf}
    }
  • [PDF] G. Abrami, A. Mehler, and C. Spiekermann, “Graph-based Format for Modeling Multimodal Annotations in Virtual Reality by Means of VAnnotatoR,” in Proceedings of the 21st International Conference on Human-Computer Interaction, HCII 2019, Cham, 2019, pp. 351-358.
    [Abstract] [BibTeX]

    Projects in the field of Natural Language Processing (NLP), the Digital Humanities (DH) and related disciplines dealing with machine learning of complex relationships between data objects need annotations to obtain sufficiently rich training and test sets. The visualization of such data sets and their underlying Human Computer Interaction (HCI) are perennial problems of computer science. However, despite some success stories, the clarity of information presentation and the flexibility of the annotation process may decrease with the complexity of the underlying data objects and their relationships. In order to face this problem, the so-called VAnnotatoR was developed, as a flexible annotation tool using 3D glasses and augmented reality devices, which enables annotation and visualization in three-dimensional virtual environments. In addition, multimodal objects are annotated and visualized within a graph-based approach.
    @InProceedings{Abrami:Mehler:Spiekermann:2019,
      Author         = {Abrami, Giuseppe and Mehler, Alexander and Spiekermann, Christian},
      Title          = {{Graph-based Format for Modeling Multimodal Annotations in Virtual Reality by Means of VAnnotatoR}},
      BookTitle      = {Proceedings of the 21st International Conference on Human-Computer Interaction, HCII 2019},
      Series         = {HCII 2019},
      location       = {Orlando, Florida, USA},
      editor   = {Stephanidis, Constantine and Antona, Margherita},
      month     = {July},
    publisher="Springer International Publishing",
    address="Cham",
    pages="351--358",
    abstract="Projects in the field of Natural Language Processing (NLP), the Digital Humanities (DH) and related disciplines dealing with machine learning of complex relationships between data objects need annotations to obtain sufficiently rich training and test sets. The visualization of such data sets and their underlying Human Computer Interaction (HCI) are perennial problems of computer science. However, despite some success stories, the clarity of information presentation and the flexibility of the annotation process may decrease with the complexity of the underlying data objects and their relationships. In order to face this problem, the so-called VAnnotatoR was developed, as a flexible annotation tool using 3D glasses and augmented reality devices, which enables annotation and visualization in three-dimensional virtual environments. In addition, multimodal objects are annotated and visualized within a graph-based approach.",
    isbn="978-3-030-30712-7",
    pdf ={https://link.springer.com/content/pdf/10.1007%2F978-3-030-30712-7_44.pdf},
      year           = 2019
    }
  • [PDF] G. Abrami, C. Spiekermann, and A. Mehler, “VAnnotatoR: Ein Werkzeug zur Annotation multimodaler Netzwerke in dreidimensionalen virtuellen Umgebungen,” in Proceedings of the 6th Digital Humanities Conference in the German-speaking Countries, DHd 2019, 2019.
    [Poster][BibTeX]

    @InProceedings{Abrami:Spiekermann:Mehler:2019,
      Author         = {Abrami, Giuseppe and Spiekermann, Christian and Mehler, Alexander},
      Title          = {{VAnnotatoR: Ein Werkzeug zur Annotation multimodaler Netzwerke in dreidimensionalen virtuellen Umgebungen}},
      BookTitle      = {Proceedings of the 6th Digital Humanities Conference in the German-speaking Countries, DHd 2019},
      Series   = {DHd 2019}, 
      pdf     = {https://www.texttechnologylab.org/wp-content/uploads/2019/04/Preprint_VAnnotatoR_DHd2019.pdf},
      poster   = {https://www.texttechnologylab.org/wp-content/uploads/2019/04/DHDVAnnotatoRPoster.pdf},  
    location       = {Frankfurt, Germany},
      year           = 2019
    }

2018 (2)

  • [PDF] A. Mehler, G. Abrami, C. Spiekermann, and M. Jostock, “VAnnotatoR: A Framework for Generating Multimodal Hypertexts,” in Proceedings of the 29th ACM Conference on Hypertext and Social Media, New York, NY, USA, 2018.
    [BibTeX]

    @InProceedings{Mehler:Abrami:Spiekermann:Jostock:2018,
        author = {Mehler, Alexander and Abrami, Giuseppe and Spiekermann, Christian and Jostock, Matthias},
        title = {{VAnnotatoR}: {A} Framework for Generating Multimodal Hypertexts},
        booktitle = {Proceedings of the 29th ACM Conference on Hypertext and Social Media},
        series = {HT '18},
        year = {2018},
        location = {Baltimore, Maryland},
        publisher = {ACM},
        address = {New York, NY, USA},
        pdf = {http://delivery.acm.org/10.1145/3210000/3209572/p150-mehler.pdf}
    }
  • [PDF] C. Spiekermann, G. Abrami, and A. Mehler, “VAnnotatoR: a Gesture-driven Annotation Framework for Linguistic and Multimodal Annotation,” in Proceedings of the Annotation, Recognition and Evaluation of Actions (AREA 2018) Workshop, 2018.
    [BibTeX]

    @InProceedings{Spiekerman:Abrami:Mehler:2018,
      Author         = {Christian Spiekermann and Giuseppe Abrami and
                       Alexander Mehler},
      Title          = {{VAnnotatoR}: a Gesture-driven Annotation Framework
                       for Linguistic and Multimodal Annotation},
      BookTitle      = {Proceedings of the Annotation, Recognition and
                       Evaluation of Actions (AREA 2018) Workshop},
      Series         = {AREA},
      location       = {Miyazaki, Japan},
      pdf            = {https://www.texttechnologylab.org/wp-content/uploads/2018/03/VAnnotatoR.pdf},
      year           = 2018
    }
