Title
Representing Multimodal Linguistic Annotated Data
Authors
Brigitte Bigi, Tatsuya Watanabe and Laurent Prévot
Abstract
The question of interoperability for annotated linguistic resources covers several aspects. First, it requires a representation framework that makes it possible to compare, and possibly merge, different annotation schemas. In this paper, a general description level for representing multimodal linguistic annotations is proposed. It focuses on the representation of time and of data content: the paper reconsiders and enhances the current, generalized representation of annotations. An XML schema for such annotations is proposed, along with a Python API. This framework is implemented in multi-platform software distributed under the terms of the GNU General Public License.
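As an illustration of the kind of representation the abstract describes, the following is a minimal, hypothetical Python sketch (not the actual API from the paper or its software): it assumes each annotation pairs a time localization, built from time points with an imprecision radius, with a text label, and that annotations are grouped into named tiers.

```python
# Hypothetical sketch of a time-anchored annotation model; all class and
# attribute names here are illustrative assumptions, not the paper's API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TimePoint:
    """A time value in seconds; the radius models annotation imprecision."""
    midpoint: float
    radius: float = 0.0


@dataclass
class TimeInterval:
    """A localization spanning two time points."""
    begin: TimePoint
    end: TimePoint

    def duration(self) -> float:
        return self.end.midpoint - self.begin.midpoint


@dataclass
class Annotation:
    """A label anchored to a time localization."""
    location: TimeInterval
    label: str


@dataclass
class Tier:
    """One annotation layer (e.g. tokens, gestures) of a multimodal corpus."""
    name: str
    annotations: List[Annotation] = field(default_factory=list)

    def add(self, ann: Annotation) -> None:
        self.annotations.append(ann)
        # Keep annotations ordered by start time.
        self.annotations.sort(key=lambda a: a.location.begin.midpoint)


tokens = Tier("Tokens")
tokens.add(Annotation(TimeInterval(TimePoint(0.4), TimePoint(0.9)), "world"))
tokens.add(Annotation(TimeInterval(TimePoint(0.0), TimePoint(0.4)), "hello"))
print([a.label for a in tokens.annotations])  # -> ['hello', 'world']
```

Such a structure maps naturally onto an XML serialization in which each tier element contains annotation elements carrying their localization and label.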
Topics
Corpus (Creation, Annotation, etc.), Tools, Systems, Applications
Full paper
Representing Multimodal Linguistic Annotated Data
Bibtex
@InProceedings{BIGI14.51,
  author    = {Brigitte Bigi and Tatsuya Watanabe and Laurent Prévot},
  title     = {Representing Multimodal Linguistic Annotated Data},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)},
  year      = {2014},
  month     = {may},
  date      = {26-31},
  address   = {Reykjavik, Iceland},
  editor    = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
  publisher = {European Language Resources Association (ELRA)},
  isbn      = {978-2-9517408-8-4},
  language  = {english}
}