Wahed Hemati
M.Sc. Computer Science
Staff member

Publications

Total: 16

2018 (8)

  • E. Rutherford, W. Hemati, and A. Mehler, “Corpus2Wiki: A MediaWiki based Annotation & Visualisation Tool for the Digital Humanities,” in INF-DH-2018, Bonn, 2018. accepted
    [BibTeX]

    @inproceedings{Rutherford:Hemati:Mehler:2018,
    author = {Rutherford, Eleanor and Hemati, Wahed and Mehler, Alexander},
    title = {{Corpus2Wiki}: A MediaWiki based Annotation \& Visualisation Tool for the Digital Humanities},
    booktitle = {INF-DH-2018},
    year = {2018},
    note = {accepted},
    editor = {Burghardt, Manuel and Müller-Birn, Claudia},
    publisher = {Gesellschaft für Informatik e.V.},
    address = {Bonn}
    }
  • [DOI] C. Driller, M. Koch, M. Schmidt, C. Weiland, T. Hörnschemeyer, T. Hickler, G. Abrami, S. Ahmed, R. Gleim, W. Hemati, T. Uslu, A. Mehler, A. Pachzelt, J. Rexhepi, T. Risse, J. Schuster, G. Kasperek, and A. Hausinger, “Workflow and Current Achievements of BIOfid, an Information Service Mobilizing Biodiversity Data from Literature Sources,” Biodiversity Information Science and Standards, vol. 2, p. e25876, 2018.
    [Abstract] [BibTeX]

    BIOfid is a specialized information service currently being developed to mobilize biodiversity data dormant in printed historical and modern literature and to offer a platform for open access journals on the science of biodiversity. Our team of librarians, computer scientists and biologists produce high-quality text digitizations, develop new text-mining tools and generate detailed ontologies enabling semantic text analysis and semantic search by means of user-specific queries. In a pilot project we focus on German publications on the distribution and ecology of vascular plants, birds, moths and butterflies extending back to the Linnaeus period about 250 years ago. The three organism groups have been selected according to current demands of the relevant research community in Germany. The text corpus defined for this purpose comprises over 400 volumes with more than 100,000 pages to be digitized and will be complemented by journals from other digitization projects, copyright-free and project-related literature. With TextImager (Natural Language Processing & Text Visualization) and TextAnnotator (Discourse Semantic Annotation) we have already extended and launched tools that focus on the text-analytical section of our project. Furthermore, taxonomic and anatomical ontologies elaborated by us for the taxa prioritized by the project’s target group - German institutions and scientists active in biodiversity research - are constantly improved and expanded to maximize scientific data output. Our poster describes the general workflow of our project ranging from literature acquisition via software development, to data availability on the BIOfid web portal (http://biofid.de/), and the implementation into existing platforms which serve to promote global accessibility of biodiversity data.
    @article{Driller:et:al:2018,
            author = {Christine Driller and Markus Koch and Marco Schmidt and Claus Weiland and Thomas Hörnschemeyer and Thomas Hickler and Giuseppe Abrami and Sajawel Ahmed and Rüdiger Gleim and Wahed Hemati and Tolga Uslu and Alexander Mehler and Adrian Pachzelt and Jashar Rexhepi and Thomas Risse and Janina Schuster and Gerwin Kasperek and Angela Hausinger},
            title = {Workflow and Current Achievements of BIOfid, an Information Service Mobilizing Biodiversity Data from Literature Sources},
            volume = {2},
            year = {2018},
            doi = {10.3897/biss.2.25876},
            publisher = {Pensoft Publishers},
            abstract = {BIOfid is a specialized information service currently being developed to mobilize biodiversity data dormant in printed historical and modern literature and to offer a platform for open access journals on the science of biodiversity. Our team of librarians, computer scientists and biologists produce high-quality text digitizations, develop new text-mining tools and generate detailed ontologies enabling semantic text analysis and semantic search by means of user-specific queries. In a pilot project we focus on German publications on the distribution and ecology of vascular plants, birds, moths and butterflies extending back to the Linnaeus period about 250 years ago. The three organism groups have been selected according to current demands of the relevant research community in Germany. The text corpus defined for this purpose comprises over 400 volumes with more than 100,000 pages to be digitized and will be complemented by journals from other digitization projects, copyright-free and project-related literature. With TextImager (Natural Language Processing & Text Visualization) and TextAnnotator (Discourse Semantic Annotation) we have already extended and launched tools that focus on the text-analytical section of our project. Furthermore, taxonomic and anatomical ontologies elaborated by us for the taxa prioritized by the project’s target group - German institutions and scientists active in biodiversity research - are constantly improved and expanded to maximize scientific data output. Our poster describes the general workflow of our project ranging from literature acquisition via software development, to data availability on the BIOfid web portal (http://biofid.de/), and the implementation into existing platforms which serve to promote global accessibility of biodiversity data.},
            pages = {e25876},
            URL = {https://doi.org/10.3897/biss.2.25876},
            eprint = {https://doi.org/10.3897/biss.2.25876},
            journal = {Biodiversity Information Science and Standards}
    }
  • [PDF] W. Hemati, A. Mehler, T. Uslu, D. Baumartz, and G. Abrami, “Evaluating and Integrating Databases in the Area of NLP,” in International Quantitative Linguistics Conference (QUALICO 2018), 2018.
    [Poster][BibTeX]

    @inproceedings{Hemati:Mehler:Uslu:Baumartz:Abrami:2018,
        author={Wahed Hemati and Alexander Mehler and Tolga Uslu and Daniel Baumartz and Giuseppe Abrami},
        title={Evaluating and Integrating Databases in the Area of {NLP}},
        booktitle={International Quantitative Linguistics Conference (QUALICO 2018)},
        year={2018},
        pdf={https://www.texttechnologylab.org/wp-content/uploads/2018/04/Hemat-Mehler-Uslu-Baumartz-Abrami-Qualico-2018.pdf},
        poster={https://www.texttechnologylab.org/wp-content/uploads/2018/10/qualico2018_databases_poster_hemati_mehler_uslu_baumartz_abrami.pdf},
        location={Wroclaw, Poland}
    }
  • A. Mehler, W. Hemati, R. Gleim, and D. Baumartz, “VienNA: Auf dem Weg zu einer Infrastruktur für die verteilte interaktive evolutionäre Verarbeitung natürlicher Sprache,” in Forschungsinfrastrukturen und digitale Informationssysteme in der germanistischen Sprachwissenschaft, H. Lobin, R. Schneider, and A. Witt, Eds., Berlin: De Gruyter, 2018, vol. 6. In German. Title translates as: VienNA: Towards an Infrastructure for the Distributed Interactive Evolutionary Processing of Natural Language
    [BibTeX]

    @InCollection{Mehler:Hemati:Gleim:Baumartz:2018,
      Author         = {Alexander Mehler and Wahed Hemati and Rüdiger Gleim
                       and Daniel Baumartz},
      Title          = {{VienNA}: Auf dem Weg zu einer Infrastruktur für
                       die verteilte interaktive evolutionäre Verarbeitung
                       natürlicher Sprache},
      BookTitle      = {Forschungsinfrastrukturen und digitale
                       Informationssysteme in der germanistischen
                       Sprachwissenschaft},
      Publisher      = {De Gruyter},
      Editor         = {Henning Lobin and Roman Schneider and Andreas Witt},
      Volume         = {6},
      Address        = {Berlin},
      year           = 2018
    }
  • A. Mehler, W. Hemati, T. Uslu, and A. Lücking, “A Multidimensional Model of Syntactic Dependency Trees for Authorship Attribution,” in Quantitative analysis of dependency structures, J. Jiang and H. Liu, Eds., Berlin/New York: De Gruyter, 2018.
    [Abstract] [BibTeX]

    In this chapter we introduce a multidimensional model of syntactic dependency trees. Our ultimate goal is to generate fingerprints of such trees to predict the author of the underlying sentences. The chapter makes a first attempt to create such fingerprints for sentence categorization via the detour of text categorization. We show that at text level, aggregated dependency structures actually provide information about authorship. At the same time, we show that this does not hold for topic detection. We evaluate our model using a quarter of a million sentences collected in two corpora: the first is sampled from literary texts, the second from Wikipedia articles. As a second finding of our approach, we show that quantitative models of dependency structure do not yet allow for detecting syntactic alignment in written communication. We conclude that this is mainly due to effects of lexical alignment on syntactic alignment.
    @InCollection{Mehler:Hemati:Uslu:Luecking:2018,
      Author         = {Alexander Mehler and Wahed Hemati and Tolga Uslu and
                       Andy Lücking},
      Title          = {A Multidimensional Model of Syntactic Dependency Trees
                       for Authorship Attribution},
      BookTitle      = {Quantitative analysis of dependency structures},
      Publisher      = {De Gruyter},
      Editor         = {Jingyang Jiang and Haitao Liu},
      Address        = {Berlin/New York},
      abstract       = {Abstract: In this chapter we introduce a
    multidimensional model of syntactic dependency trees.
    Our ultimate goal is to generate fingerprints of such
    trees to predict the author of the underlying
    sentences. The chapter makes a first attempt to create
    such fingerprints for sentence categorization via the
    detour of text categorization. We show that at text
    level, aggregated dependency structures actually
    provide information about authorship. At the same time,
    we show that this does not hold for topic detection. We
    evaluate our model using a quarter of a million
    sentences collected in two corpora: the first is
    sampled from literary texts, the second from Wikipedia
    articles. As a second finding of our approach, we show
    that quantitative models of dependency structure do not
    yet allow for detecting syntactic alignment in written
    communication. We conclude that this is mainly due to
    effects of lexical alignment on syntactic alignment.},
      keywords       = {Dependency structure, Authorship attribution, Text
                       categorization, Syntactic Alignment},
      year           = 2018
    }
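
    A note on the entry above: the chapter's actual fingerprinting method is not reproduced on this page. As a minimal sketch of the underlying idea only, namely aggregating dependency structure over a text and comparing the resulting profiles, consider the following Python fragment; the input format (sentences as lists of (head index, dependency label) pairs) and all data are assumptions made for illustration.

    from collections import Counter
    from math import sqrt

    # Hypothetical input format: a text is a list of parsed sentences,
    # each sentence a list of (head_index, dependency_label) pairs.
    # Schematic illustration only, not the chapter's model.

    def fingerprint(text):
        """Relative frequencies of dependency labels, aggregated over a text."""
        counts = Counter(label for sentence in text for _, label in sentence)
        total = sum(counts.values())
        return {label: n / total for label, n in counts.items()}

    def cosine(fp_a, fp_b):
        """Cosine similarity between two sparse frequency vectors."""
        dot = sum(fp_a[k] * fp_b[k] for k in fp_a.keys() & fp_b.keys())
        norm_a = sqrt(sum(v * v for v in fp_a.values()))
        norm_b = sqrt(sum(v * v for v in fp_b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    # Toy comparison of two single-sentence "texts".
    text_a = [[(2, "nsubj"), (0, "root"), (2, "obj")]]
    text_b = [[(2, "nsubj"), (0, "root"), (2, "obj"), (3, "amod")]]
    print(cosine(fingerprint(text_a), fingerprint(text_b)))

    On the chapter's hypothesis, texts by the same author yield systematically more similar profiles than texts by different authors; the published model is a much richer, multidimensional variant of this idea.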
  • T. Uslu, L. Miebach, S. Wolfsgruber, M. Wagner, K. Fließbach, R. Gleim, W. Hemati, A. Henlein, and A. Mehler, “Automatic Classification in Memory Clinic Patients and in Depressive Patients,” in Proceedings of Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric impairments (RaPID-2), 2018.
    [BibTeX]

    @InProceedings{Uslu:et:al:2018:a,
      Author         = {Tolga Uslu and Lisa Miebach and Steffen Wolfsgruber
                       and Michael Wagner and Klaus Fließbach and Rüdiger
                       Gleim and Wahed Hemati and Alexander Henlein and
                       Alexander Mehler},
      Title          = {{Automatic Classification in Memory Clinic Patients
                       and in Depressive Patients}},
      BookTitle      = {Proceedings of Resources and ProcessIng of linguistic,
                       para-linguistic and extra-linguistic Data from people
                       with various forms of cognitive/psychiatric impairments
                       (RaPID-2)},
      Series         = {RaPID},
      location       = {Miyazaki, Japan},
      year           = 2018
    }
  • [PDF] T. Uslu, A. Mehler, D. Baumartz, A. Henlein, and W. Hemati, “fastSense: An Efficient Word Sense Disambiguation Classifier,” in Proceedings of the 11th edition of the Language Resources and Evaluation Conference, May 7 – 12, Miyazaki, Japan, 2018.
    [BibTeX]

    @InProceedings{Uslu:et:al:2018,
      Author         = {Tolga Uslu and Alexander Mehler and Daniel Baumartz
                       and Alexander Henlein and Wahed Hemati},
      Title          = {fastSense: An Efficient Word Sense Disambiguation
                       Classifier},
      BookTitle      = {Proceedings of the 11th edition of the Language
                       Resources and Evaluation Conference, May 7 - 12},
      Series         = {LREC 2018},
      Address        = {Miyazaki, Japan},
      pdf            = {https://www.texttechnologylab.org/wp-content/uploads/2018/03/fastSense.pdf},
      year           = 2018
    }
  • G. Abrami, S. Ahmed, R. Gleim, W. Hemati, A. Mehler, and T. Uslu, Natural Language Processing and Text Mining for BIOfid, 2018.
    [BibTeX]

    @misc{Abrami:et:al:2018b,
     author = {Abrami, Giuseppe and Ahmed, Sajawel and Gleim, R{\"u}diger and Hemati, Wahed and Mehler, Alexander and Uslu, Tolga},
     title = {{Natural Language Processing and Text Mining for BIOfid}},
     howpublished = {Presentation at the 1st Meeting of the Scientific Advisory Board of the BIOfid Project},
     address = {Goethe-University, Frankfurt am Main, Germany},
     year = {2018},
     month = {March},
     day = {08}
    }

2017 (5)

  • [PDF] W. Hemati, A. Mehler, and T. Uslu, “CRFVoter: Chemical Entity Mention, Gene and Protein Related Object recognition using a conglomerate of CRF based tools,” in BioCreative V.5. Proceedings, 2017.
    [BibTeX]

    @InProceedings{Hemati:Mehler:Uslu:2017,
      Author         = {Wahed Hemati and Alexander Mehler and Tolga Uslu},
      Title          = {{CRFVoter}: Chemical Entity Mention, Gene and Protein
                       Related Object recognition using a conglomerate of CRF
                       based tools},
      BookTitle      = {BioCreative V.5. Proceedings},
      pdf            = {https://www.texttechnologylab.org/wp-content/uploads/2018/03/CRFVoter.pdf},
      year           = 2017
    }
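
    The "conglomerate of CRF based tools" in the title above is an ensemble over sequence taggers. CRFVoter's actual combination scheme is not detailed on this page; the Python sketch below shows only the simplest conceivable variant, per-token majority voting over aligned BIO label sequences, with invented tagger outputs.

    from collections import Counter

    def majority_vote(predictions):
        """Combine aligned BIO label sequences from several taggers by
        per-token majority vote (ties fall to the first-seen label)."""
        assert len({len(seq) for seq in predictions}) == 1, "sequences must be aligned"
        return [Counter(labels).most_common(1)[0][0]
                for labels in zip(*predictions)]

    # Invented outputs of three taggers on a five-token sentence.
    tagger_outputs = [
        ["B-CHEM", "I-CHEM", "O", "O",      "B-GENE"],
        ["B-CHEM", "O",      "O", "O",      "B-GENE"],
        ["B-CHEM", "I-CHEM", "O", "B-GENE", "B-GENE"],
    ]
    print(majority_vote(tagger_outputs))  # ['B-CHEM', 'I-CHEM', 'O', 'O', 'B-GENE']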
  • [PDF] W. Hemati, T. Uslu, and A. Mehler, “TextImager as an interface to BeCalm,” in BioCreative V.5. Proceedings, 2017.
    [BibTeX]

    @InProceedings{Hemati:Uslu:Mehler:2017,
      Author         = {Wahed Hemati and Tolga Uslu and Alexander Mehler},
      Title          = {{TextImager} as an interface to {BeCalm}},
      BookTitle      = {BioCreative V.5. Proceedings},
      pdf            = {https://www.texttechnologylab.org/wp-content/uploads/2018/03/TextImager_BeCalm.pdf},
      year           = 2017
    }
  • A. Mehler, R. Gleim, W. Hemati, and T. Uslu, “Skalenfreie online soziale Lexika am Beispiel von Wiktionary,” in Proceedings of 53rd Annual Conference of the Institut für Deutsche Sprache (IDS), March 14-16, Mannheim, Germany, Berlin, 2017. In German. Title translates as: Scale-free Online Social Lexica: The Example of Wiktionary
    [Abstract] [BibTeX]

    In English: The paper deals with characteristics of the structural, thematic and participatory dynamics of collaboratively generated lexical networks. This is done by example of Wiktionary. Starting from a network-theoretical model in terms of so-called multi-layer networks, we describe Wiktionary as a scale-free lexicon. Systems of this sort are characterized by the fact that their content-related dynamics is determined by the underlying dynamics of collaborating authors. This happens in a way that social structure imprints on content structure. According to this conception, the unequal distribution of the activities of authors results in a correspondingly unequal distribution of the information units documented within the lexicon. The paper focuses on foundations for describing such systems starting from a parameter space which requires to deal with Wiktionary as an issue in big data analysis.

    In German: Der Beitrag thematisiert Eigenschaften der strukturellen, thematischen und partizipativen Dynamik kollaborativ erzeugter lexikalischer Netzwerke am Beispiel von Wiktionary. Ausgehend von einem netzwerktheoretischen Modell in Form so genannter Mehrebenennetzwerke wird Wiktionary als ein skalenfreies Lexikon beschrieben. Systeme dieser Art zeichnen sich dadurch aus, dass ihre inhaltliche Dynamik durch die zugrundeliegende Kollaborationsdynamik bestimmt wird, und zwar so, dass sich die soziale Struktur der entsprechenden inhaltlichen Struktur aufprägt. Dieser Auffassung gemäß führt die Ungleichverteilung der Aktivitäten von Lexikonproduzenten zu einer analogen Ungleichverteilung der im Lexikon dokumentierten Informationseinheiten. Der Beitrag thematisiert Grundlagen zur Beschreibung solcher Systeme ausgehend von einem Parameterraum, welcher die netzwerkanalytische Betrachtung von Wiktionary als Big-Data-Problem darstellt.
    @InProceedings{Mehler:Gleim:Hemati:Uslu:2017,
      Author         = {Alexander Mehler and Rüdiger Gleim and Wahed Hemati
                       and Tolga Uslu},
      Title          = {{Skalenfreie online soziale Lexika am Beispiel von
                       Wiktionary}},
      BookTitle      = {Proceedings of 53rd Annual Conference of the Institut
                       für Deutsche Sprache (IDS), March 14-16, Mannheim,
                       Germany},
      Editor         = {Stefan Engelberg and Henning Lobin and Kathrin Steyer
                       and Sascha Wolfer},
      Address        = {Berlin},
      Publisher      = {De Gruyter},
      Note           = {In German. Title translates as: Scale-free
                       Online Social Lexica: The Example of Wiktionary},
      abstract       = {In English: The paper deals with characteristics of
    the structural, thematic and participatory dynamics of
    collaboratively generated lexical networks. This is
    done by example of Wiktionary. Starting from a
    network-theoretical model in terms of so-called
    multi-layer networks, we describe Wiktionary as a
    scale-free lexicon. Systems of this sort are
    characterized by the fact that their content-related
    dynamics is determined by the underlying dynamics of
    collaborating authors. This happens in a way that
    social structure imprints on content structure.
    According to this conception, the unequal distribution
    of the activities of authors results in a
    correspondingly unequal distribution of the information
    units documented within the lexicon. The paper focuses
    on foundations for describing such systems starting
    from a parameter space which requires to deal with
    Wiktionary as an issue in big data analysis. 
    In German:
    Der Beitrag thematisiert Eigenschaften der
    strukturellen, thematischen und partizipativen Dynamik
    kollaborativ erzeugter lexikalischer Netzwerke am
    Beispiel von Wiktionary. Ausgehend von einem
    netzwerktheoretischen Modell in Form so genannter
    Mehrebenennetzwerke wird Wiktionary als ein
    skalenfreies Lexikon beschrieben. Systeme dieser Art
    zeichnen sich dadurch aus, dass ihre inhaltliche
    Dynamik durch die zugrundeliegende
    Kollaborationsdynamik bestimmt wird, und zwar so, dass
    sich die soziale Struktur der entsprechenden
    inhaltlichen Struktur aufprägt. Dieser Auffassung
    gemäß führt die Ungleichverteilung der Aktivitäten
    von Lexikonproduzenten zu einer analogen
    Ungleichverteilung der im Lexikon dokumentierten
    Informationseinheiten. Der Beitrag thematisiert
    Grundlagen zur Beschreibung solcher Systeme ausgehend
    von einem Parameterraum, welcher die
    netzwerkanalytische Betrachtung von Wiktionary als
    Big-Data-Problem darstellt.},
      year           = 2017
    }
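
    "Scale-free" in the entry above refers to heavy-tailed, power-law-like distributions of author activity and of the lexical information they produce. As a hedged illustration of how such a claim is commonly checked, and not as the paper's actual analysis, the Python sketch below applies the standard maximum-likelihood estimator alpha = 1 + n / sum(ln(x_i / x_min)) to an invented sample of per-author edit counts.

    from math import log

    def powerlaw_alpha(samples, x_min=1):
        """Maximum-likelihood estimate of the exponent alpha of a power law
        p(x) ~ x**(-alpha), using the continuous approximation for the tail
        x >= x_min (Clauset/Shalizi/Newman-style estimator)."""
        tail = [x for x in samples if x >= x_min]
        return 1 + len(tail) / sum(log(x / x_min) for x in tail)

    # Invented per-author edit counts: many occasional contributors and
    # a few highly active ones, i.e. the shape the paper describes.
    edits = [1] * 800 + [2] * 120 + [5] * 50 + [20] * 20 + [100] * 8 + [1000] * 2
    print(round(powerlaw_alpha(edits), 2))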
  • A. Mehler, O. Zlatkin-Troitschanskaia, W. Hemati, D. Molerov, A. Lücking, and S. Schmidt, “Integrating Computational Linguistic Analysis of Multilingual Learning Data and Educational Measurement Approaches to Explore Student Learning in Higher Education,” in Positive Learning in the Age of Information (PLATO) — A blessing or a curse?, O. Zlatkin-Troitschanskaia, G. Wittum, and A. Dengel, Eds., Wiesbaden: Springer, 2017.
    [Abstract] [BibTeX]

    This chapter develops a computational linguistic model for analyzing and comparing multilingual data as well as its application to a large body of standardized assessment data from higher education. The approach employs both an automatic and a manual annotation of the data on several linguistic layers (including parts of speech, text structure and content). Quantitative features of the textual data are explored that are related to both the students’ (domain-specific knowledge) test results and their level of academic experience. The respective analysis involves statistics of distance correlation, text categorization with respect to text types (questions and distractors) as well as languages (English and German), and network analysis as a means to assess dependencies between features. The results indicate a correlation between correct test results of students and linguistic features of the verbal presentations of tests indicating a language influence on higher education test performance. It is also found that this influence relates to special language. Thus, this integrative modeling approach contributes a test basis for a large-scale analysis of learning data and points to a number of subsequent more detailed research.
    @InCollection{Mehler:Zlatkin-Troitschanskaia:Hemati:Molerov:Luecking:Schmidt:2017,
      Author         = {Alexander Mehler and Olga Zlatkin-Troitschanskaia and
                       Wahed Hemati and Dimitri Molerov and Andy Lücking and
                       Susanne Schmidt},
      Title          = {Integrating Computational Linguistic Analysis of
                       Multilingual Learning Data and Educational Measurement
                       Approaches to Explore Student Learning in Higher
                       Education},
      BookTitle      = {Positive Learning in the Age of Information ({PLATO})
                       -- A blessing or a curse?},
      Publisher      = {Springer},
      Editor         = {Zlatkin-Troitschanskaia, Olga and Wittum, Gabriel and
                       Dengel, Andreas},
      Address        = {Wiesbaden},
      abstract       = {This chapter develops a computational linguistic model
    for analyzing and comparing multilingual data as well
    as its application to a large body of standardized
    assessment data from higher education. The approach
    employs both an automatic and a manual annotation of
    the data on several linguistic layers (including parts
    of speech, text structure and content). Quantitative
    features of the textual data are explored that are
    related to both the students’ (domain-specific
    knowledge) test results and their level of academic
    experience. The respective analysis involves statistics
    of distance correlation, text categorization with
    respect to text types (questions and distractors) as
    well as languages (English and German), and network
    analysis as a means to assess dependencies between
    features. The results indicate a correlation between
    correct test results of students and linguistic
    features of the verbal presentations of tests
    indicating a language influence on higher education
    test performance. It is also found that this influence
    relates to special language. Thus, this integrative
    modeling approach contributes a test basis for a
    large-scale analysis of learning data and points to a
    number of subsequent more detailed research.},
      year           = 2017
    }
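
    One of the statistics named in the abstract above, distance correlation, has a compact textbook definition (Székely et al.) that is easy to sketch. The Python fragment below is a generic illustration for one-dimensional samples, not the chapter's analysis pipeline, and the test data are invented.

    from math import sqrt

    def _centered(values):
        """Doubly centered matrix of pairwise distances of a 1-D sample."""
        n = len(values)
        d = [[abs(a - b) for b in values] for a in values]
        row = [sum(r) / n for r in d]
        grand = sum(row) / n
        return [[d[j][k] - row[j] - row[k] + grand for k in range(n)]
                for j in range(n)]

    def distance_correlation(x, y):
        """Sample distance correlation of two equally long 1-D samples:
        1.0 for an exact linear relation, near 0 for unrelated samples."""
        n = len(x)
        a, b = _centered(x), _centered(y)
        dcov2 = sum(a[j][k] * b[j][k] for j in range(n) for k in range(n)) / n**2
        dvar_x = sum(v * v for r in a for v in r) / n**2
        dvar_y = sum(v * v for r in b for v in r) / n**2
        denom = (dvar_x * dvar_y) ** 0.25
        return sqrt(dcov2) / denom if denom else 0.0

    print(distance_correlation([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))  # 1.0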
  • [PDF] T. Uslu, W. Hemati, A. Mehler, and D. Baumartz, “TextImager as a Generic Interface to R,” in Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), 2017.
    [BibTeX]

    @InProceedings{Uslu:Hemati:Mehler:Baumartz:2017,
      Author         = {Tolga Uslu and Wahed Hemati and Alexander Mehler and
                       Daniel Baumartz},
      Title          = {{TextImager} as a Generic Interface to {R}},
      BookTitle      = {Software Demonstrations of the 15th Conference of the
                       European Chapter of the Association for Computational
                       Linguistics (EACL 2017)},
      location       = {Valencia, Spain},
      pdf            = {https://www.texttechnologylab.org/wp-content/uploads/2018/03/TextImager.pdf},
      year           = 2017
    }

2016 (3)

  • [PDF] W. Hemati, T. Uslu, and A. Mehler, “TextImager: a Distributed UIMA-based System for NLP,” in Proceedings of the COLING 2016 System Demonstrations, 2016.
    [BibTeX]

    @InProceedings{Hemati:Uslu:Mehler:2016,
      Author         = {Wahed Hemati and Tolga Uslu and Alexander Mehler},
      Title          = {TextImager: a Distributed UIMA-based System for NLP},
      BookTitle      = {Proceedings of the COLING 2016 System Demonstrations},
      Organization   = {Federated Conference on Computer Science and
                       Information Systems},
      location       = {Osaka, Japan},
      pdf            = {https://www.texttechnologylab.org/wp-content/uploads/2018/03/TextImager2016.pdf},
      year           = 2016
    }
  • [PDF] A. Mehler, T. Uslu, and W. Hemati, “Text2voronoi: An Image-driven Approach to Differential Diagnosis,” in Proceedings of the 5th Workshop on Vision and Language (VL’16) hosted by the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin, 2016.
    [BibTeX]

    @InProceedings{Mehler:Uslu:Hemati:2016,
      Author         = {Alexander Mehler and Tolga Uslu and Wahed Hemati},
      Title          = {Text2voronoi: An Image-driven Approach to Differential
                       Diagnosis},
      BookTitle      = {Proceedings of the 5th Workshop on Vision and Language
                       (VL'16) hosted by the 54th Annual Meeting of the
                       Association for Computational Linguistics (ACL), Berlin},
      pdf            = {https://aclweb.org/anthology/W/W16/W16-3212.pdf},
      year           = 2016
    }
  • [DOI] A. Mehler, R. Gleim, T. vor der Brück, W. Hemati, T. Uslu, and S. Eger, “Wikidition: Automatic Lexiconization and Linkification of Text Corpora,” Information Technology, pp. 70-79, 2016.
    [Abstract] [BibTeX]

    We introduce a new text technology, called Wikidition, which automatically generates large scale editions of corpora of natural language texts. Wikidition combines a wide range of text mining tools for automatically linking lexical, sentential and textual units. This includes the extraction of corpus-specific lexica down to the level of syntactic words and their grammatical categories. To this end, we introduce a novel measure of text reuse and exemplify Wikidition by means of the capitularies, that is, a corpus of Medieval Latin texts.
    @Article{Mehler:et:al:2016,
      Author         = {Alexander Mehler and Rüdiger Gleim and Tim vor der
                       Brück and Wahed Hemati and Tolga Uslu and Steffen Eger},
      Title          = {Wikidition: Automatic Lexiconization and
                       Linkification of Text Corpora},
      Journal        = {Information Technology},
      Pages          = {70-79},
      abstract       = {We introduce a new text technology, called Wikidition,
    which automatically generates large scale editions of
    corpora of natural language texts. Wikidition combines
    a wide range of text mining tools for automatically
    linking lexical, sentential and textual units. This
    includes the extraction of corpus-specific lexica down
    to the level of syntactic words and their grammatical
    categories. To this end, we introduce a novel measure
    of text reuse and exemplify Wikidition by means of the
    capitularies, that is, a corpus of Medieval Latin
    texts.},
      doi            = {10.1515/itit-2015-0035},
      year           = 2016
    }
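
    The article above mentions "a novel measure of text reuse" without spelling it out on this page. For orientation only, the Python sketch below shows the kind of baseline such a measure competes with, Jaccard overlap of character n-gram sets, on invented Latin-like snippets; it is explicitly not the measure introduced in the paper.

    def char_ngrams(text, n=4):
        """Set of character n-grams of a lowercased, whitespace-normalized text."""
        t = " ".join(text.lower().split())
        return {t[i:i + n] for i in range(len(t) - n + 1)}

    def reuse_score(a, b, n=4):
        """Jaccard overlap of character n-gram sets: a common text-reuse
        baseline, not the measure introduced in the paper above."""
        ga, gb = char_ngrams(a, n), char_ngrams(b, n)
        return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

    print(reuse_score("omnis homo liber esto",
                      "omnis homo liber esto et securus"))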