VAnnotatoR

The VAnnotatoR allows the annotation of multimodal networks in virtual 3D environments.

In this way, networked texts and text passages can be linked with multimodal content that is itself interconnected, in order to generate multimodal and spatial hypertexts.
This content can be used for concrete technical applications, e.g. to reconstruct 3D scenes from texts and thereby train Text2Scene systems, or for applications in the digital humanities, such as the spatial reconstruction of (historical) life paths and their linking with appropriate media content, such as images or reproductions of historical buildings.
The entire system can be visualized, edited and experienced in Virtual Reality (VR) with the help of a suitable VR headset.

The VAnnotatoR uses the TextAnnotator as its technological backbone and thus inherits all of its functions for annotation, task management, annotation evaluation, and the collaborative, simultaneous processing of resources. The VAnnotatoR itself is implemented in Unity3D on top of OpenVR and therefore runs independently of both the operating system and the VR headset (it has so far been tested with the Oculus Rift, Oculus Rift S, Oculus Quest, HTC Vive and HTC Vive Cosmos under Windows 10).

Left: Abrami, Spiekermann, and Mehler (2019); Right: Abrami et al. (2020)


Total: 7

2020 (2)

  • [PDF] [https://doi.org/10.1145/3372923.3404791] [DOI] G. Abrami, A. Henlein, A. Kett, and A. Mehler, “Text2SceneVR: Generating Hypertexts with VAnnotatoR as a Pre-processing Step for Text2Scene Systems,” in Proceedings of the 31st ACM Conference on Hypertext and Social Media, New York, NY, USA, 2020, pp. 177-186.
    [BibTeX]

    @InProceedings{Abrami:Henlein:Kett:Mehler:2020,
        author = {Abrami, Giuseppe and Henlein, Alexander and Kett, Attila and Mehler, Alexander},
        title = {{Text2SceneVR}: Generating Hypertexts with VAnnotatoR as a Pre-processing Step for Text2Scene Systems},
        booktitle = {Proceedings of the 31st ACM Conference on Hypertext and Social Media},
        series = {HT ’20}, 
        year = {2020},
        location = {Virtual Event, USA}, 
        isbn = {9781450370981},
        publisher = {Association for Computing Machinery},
        address = {New York, NY, USA},
        url = {https://doi.org/10.1145/3372923.3404791},
        doi = {10.1145/3372923.3404791},
        pages = {177--186},
        numpages = {10},
        pdf={https://dl.acm.org/doi/pdf/10.1145/3372923.3404791}
    }
  • [PDF] [https://www.aclweb.org/anthology/2020.isa-1.4] A. Henlein, G. Abrami, A. Kett, and A. Mehler, “Transfer of ISOSpace into a 3D Environment for Annotations and Applications,” in Proceedings of the 16th Joint ACL – ISO Workshop on Interoperable Semantic Annotation, Marseille, 2020, pp. 32-35.
    [Abstract] [BibTeX]

    People's visual perception is very pronounced and therefore it is usually no problem for them to describe the space around them in words. Conversely, people also have no problems imagining a concept of a described space. In recent years many efforts have been made to develop a linguistic concept for spatial and spatial-temporal relations. However, the systems have not really caught on so far, which in our opinion is due to the complex models on which they are based and the lack of available training data and automated taggers. In this paper we describe a project to support spatial annotation, which could facilitate annotation by its many functions, but also enrich it with many more information. This is to be achieved by an extension by means of a VR environment, with which spatial relations can be better visualized and connected with real objects. And we want to use the available data to develop a new state-of-the-art tagger and thus lay the foundation for future systems such as improved text understanding for Text2Scene.
    @InProceedings{Henlein:et:al:2020,
      Author         = {Henlein, Alexander and Abrami, Giuseppe and Kett, Attila and Mehler, Alexander},
      Title          = {Transfer of ISOSpace into a 3D Environment for Annotations and Applications},
      booktitle      = {Proceedings of the 16th Joint ACL - ISO Workshop on Interoperable Semantic Annotation},
      month          = {May},
      year           = {2020},
      address        = {Marseille},
      publisher      = {European Language Resources Association},
      pages     = {32--35},
      abstract  = {People's visual perception is very pronounced and therefore it is usually no problem for them to describe the space around them in words. Conversely, people also have no problems imagining a concept of a described space. In recent years many efforts have been made to develop a linguistic concept for spatial and spatial-temporal relations. However, the systems have not really caught on so far, which in our opinion is due to the complex models on which they are based and the lack of available training data and automated taggers. In this paper we describe a project to support spatial annotation, which could facilitate annotation by its many functions, but also enrich it with many more information. This is to be achieved by an extension by means of a VR environment, with which spatial relations can be better visualized and connected with real objects. And we want to use the available data to develop a new state-of-the-art tagger and thus lay the foundation for future systems such as improved text understanding for Text2Scene.},
      url       = {https://www.aclweb.org/anthology/2020.isa-1.4},
      pdf      = {http://www.lrec-conf.org/proceedings/lrec2020/workshops/ISA16/pdf/2020.isa-1.4.pdf}
    }

2019 (3)

  • A. Mehler and G. Abrami, “VAnnotatoR: A framework for the multimodal reconstruction of historical situations and spaces,” in Proceedings of the Time Machine Conference, Dresden, Germany, October 10-11, 2019.
    [Poster][BibTeX]

    @inproceedings{Mehler:Abrami:2019,
        author = {Mehler, Alexander and Abrami, Giuseppe},
        title = {{VAnnotatoR}: A framework for the multimodal reconstruction of historical situations and spaces},
        booktitle = {Proceedings of the Time Machine Conference},
        year = {2019},
        date = {October 10-11},
        address = {Dresden, Germany},
        poster={https://www.texttechnologylab.org/wp-content/uploads/2019/09/TimeMachineConference.pdf}
    }
  • [PDF] G. Abrami, A. Mehler, and C. Spiekermann, “Graph-based Format for Modeling Multimodal Annotations in Virtual Reality by Means of VAnnotatoR,” in Proceedings of the 21st International Conference on Human-Computer Interaction, HCII 2019, Cham, 2019, pp. 351-358.
    [Abstract] [BibTeX]

    Projects in the field of Natural Language Processing (NLP), the Digital Humanities (DH) and related disciplines dealing with machine learning of complex relationships between data objects need annotations to obtain sufficiently rich training and test sets. The visualization of such data sets and their underlying Human Computer Interaction (HCI) are perennial problems of computer science. However, despite some success stories, the clarity of information presentation and the flexibility of the annotation process may decrease with the complexity of the underlying data objects and their relationships. In order to face this problem, the so-called VAnnotatoR was developed, as a flexible annotation tool using 3D glasses and augmented reality devices, which enables annotation and visualization in three-dimensional virtual environments. In addition, multimodal objects are annotated and visualized within a graph-based approach.
    @InProceedings{Abrami:Mehler:Spiekermann:2019,
      Author         = {Abrami, Giuseppe and Mehler, Alexander and Spiekermann, Christian},
      Title          = {{Graph-based Format for Modeling Multimodal Annotations in Virtual Reality by Means of VAnnotatoR}},
      BookTitle      = {Proceedings of the 21st International Conference on Human-Computer Interaction, HCII 2019},
      Series         = {HCII 2019},
      location       = {Orlando, Florida, USA},
      editor         = {Stephanidis, Constantine and Antona, Margherita},
      month          = {July},
      publisher      = {Springer International Publishing},
      address        = {Cham},
      pages          = {351--358},
      abstract       = {Projects in the field of Natural Language Processing (NLP), the Digital Humanities (DH) and related disciplines dealing with machine learning of complex relationships between data objects need annotations to obtain sufficiently rich training and test sets. The visualization of such data sets and their underlying Human Computer Interaction (HCI) are perennial problems of computer science. However, despite some success stories, the clarity of information presentation and the flexibility of the annotation process may decrease with the complexity of the underlying data objects and their relationships. In order to face this problem, the so-called VAnnotatoR was developed, as a flexible annotation tool using 3D glasses and augmented reality devices, which enables annotation and visualization in three-dimensional virtual environments. In addition, multimodal objects are annotated and visualized within a graph-based approach.},
      isbn           = {978-3-030-30712-7},
      pdf            = {https://link.springer.com/content/pdf/10.1007%2F978-3-030-30712-7_44.pdf},
      year           = 2019
    }
  • [PDF] G. Abrami, C. Spiekermann, and A. Mehler, “VAnnotatoR: Ein Werkzeug zur Annotation multimodaler Netzwerke in dreidimensionalen virtuellen Umgebungen,” in Proceedings of the 6th Digital Humanities Conference in the German-speaking Countries, DHd 2019, 2019.
    [Poster][BibTeX]

    @InProceedings{Abrami:Spiekermann:Mehler:2019,
      Author         = {Abrami, Giuseppe and Spiekermann, Christian and Mehler, Alexander},
      Title          = {{VAnnotatoR: Ein Werkzeug zur Annotation multimodaler Netzwerke in dreidimensionalen virtuellen Umgebungen}},
      BookTitle      = {Proceedings of the 6th Digital Humanities Conference in the German-speaking Countries, DHd 2019},
      Series         = {DHd 2019},
      pdf            = {https://www.texttechnologylab.org/wp-content/uploads/2019/04/Preprint_VAnnotatoR_DHd2019.pdf},
      poster         = {https://www.texttechnologylab.org/wp-content/uploads/2019/04/DHDVAnnotatoRPoster.pdf},
      location       = {Frankfurt, Germany},
      year           = 2019
    }

2018 (2)

  • [PDF] A. Mehler, G. Abrami, C. Spiekermann, and M. Jostock, “VAnnotatoR: A Framework for Generating Multimodal Hypertexts,” in Proceedings of the 29th ACM Conference on Hypertext and Social Media, New York, NY, USA, 2018.
    [BibTeX]

    @InProceedings{Mehler:Abrami:Spiekermann:Jostock:2018,
        author = {Mehler, Alexander and Abrami, Giuseppe and Spiekermann, Christian and Jostock, Matthias},
        title = {{VAnnotatoR}: {A} Framework for Generating Multimodal Hypertexts},
        booktitle = {Proceedings of the 29th ACM Conference on Hypertext and Social Media},
        series = {HT '18},
        year = {2018},
        location = {Baltimore, Maryland},
        publisher = {ACM},
        address = {New York, NY, USA},
        pdf = {http://delivery.acm.org/10.1145/3210000/3209572/p150-mehler.pdf}
    }
  • [PDF] C. Spiekermann, G. Abrami, and A. Mehler, “VAnnotatoR: a Gesture-driven Annotation Framework for Linguistic and Multimodal Annotation,” in Proceedings of the Annotation, Recognition and Evaluation of Actions (AREA 2018) Workshop, 2018.
    [BibTeX]

    @InProceedings{Spiekerman:Abrami:Mehler:2018,
      Author         = {Christian Spiekermann and Giuseppe Abrami and
                       Alexander Mehler},
      Title          = {{VAnnotatoR}: a Gesture-driven Annotation Framework
                       for Linguistic and Multimodal Annotation},
      BookTitle      = {Proceedings of the Annotation, Recognition and
                       Evaluation of Actions (AREA 2018) Workshop},
      Series         = {AREA},
      location       = {Miyazaki, Japan},
      pdf            = {https://www.texttechnologylab.org/wp-content/uploads/2018/03/VAnnotatoR.pdf},
      year           = 2018
    }
