LREC 2000, 2nd International Conference on Language Resources & Evaluation
Title | How To Evaluate and Compare Tagsets? A Proposal
Authors |
Déjean Hervé (Seminar für Sprachwissenschaft, Universität Tübingen, dejean@sfs.nphil.uni-tuebingen.de)
Keywords | |
Session | Session EP1 - Evaluation and Written Area |
Abstract | We propose a methodology which allows an evaluation of the distributional qualities of a tagset and a comparison between tagsets. Evaluating tagsets is crucial, since tagging is often considered one of the first tasks in language processing. The aim of tagging is to summarise linguistic information as well as possible for further processing such as syntactic parsing. The idea is to consider these further steps in order to evaluate a given tagset, and thus to measure the pertinence of the information the tagset provides for these steps. For this purpose, a Machine Learning system, ALLiS, is used, whose goal is to learn phrase structures from bracketed corpora and to generate a formal grammar which describes these structures. ALLiS learning is based on the detection of structural regularities. By this means, some non-distributional behaviours of the tagset can be pointed out, and thus some of its weaknesses or inadequacies. |
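The abstract's core idea, evaluating a tagset by how well its tags support learning phrase structure from a bracketed corpus, can be illustrated with a deliberately simplified sketch. This is not ALLiS (the paper's system is not reproduced here); all data and the purity-based score below are hypothetical assumptions used only to show the comparison principle: a tagset whose tags behave more regularly with respect to a structure (here, NP membership) scores higher.

```python
# Hypothetical sketch (NOT the ALLiS system): score a tagset by how
# deterministically each tag predicts whether a token falls inside an
# NP bracket in an annotated corpus. Higher = more regular distribution.
from collections import defaultdict

def learnability(tagged_corpus):
    """tagged_corpus: list of (tag, in_np) pairs derived from a
    bracketed corpus. Returns the average purity: how often each
    tag's majority structural label holds across the corpus."""
    counts = defaultdict(lambda: [0, 0])  # tag -> [outside NP, inside NP]
    for tag, in_np in tagged_corpus:
        counts[tag][int(in_np)] += 1
    total = sum(sum(c) for c in counts.values())
    correct = sum(max(c) for c in counts.values())
    return correct / total

# Toy data (invented for illustration): the "fine" tagset keeps nouns
# and verbs apart; the "coarse" one merges them into a single tag X,
# which blurs the NP/non-NP distinction and lowers the score.
fine   = [("DT", True), ("NN", True), ("VB", False), ("DT", True), ("NN", True)]
coarse = [("DT", True), ("X", True),  ("X", False),  ("DT", True), ("X", True)]

print(learnability(fine))    # fully regular: every tag predicts its label
print(learnability(coarse))  # merged tag X is ambiguous, so score drops
```

The same comparison logic, applied through a real structure learner rather than a purity count, is what lets the paper detect "non-distributional behaviours" of a tagset: tags whose corpus behaviour the learner cannot capture with a regular rule.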