LREC 2000: 2nd International Conference on Language Resources & Evaluation
Conference Papers
Title | The Evaluation of Systems for Cross-language Information Retrieval |
Authors |
- Braschler, Martin (Eurospider Information Technology AG, Zurich, Switzerland. braschler@eurospider.com)
- Harman, Donna (National Institute of Standards and Technology, Gaithersburg, MD, USA. donna.harman@nist.gov)
- Hess, Michael (Dept. of Information Technology, University of Zurich, Switzerland. hess@ifi.unizh.ch)
- Kluck, Michael (InformationsZentrum Sozialwissenschaften (IZ), Bonn, Germany. mkl@bonn.iz-soz.de)
- Peters, Carol (Istituto di Elaborazione della Informazione - CNR, Pisa, Italy. carol@iei.pi.cnr.it)
- Schauble, Peter (Eurospider Information Technology AG, Zurich, Switzerland. schauble@eurospider.com) |
Keywords | Cross-language Information Retrieval, Multilingual Test Collections, System Evaluation Methodologies |
Session | Session EO5 - Information Retrieval and Question Answering Evaluation |
Abstract | We describe the creation of an infrastructure for testing cross-language text retrieval systems within the context of the Text REtrieval Conferences (TREC) organised by the US National Institute of Standards and Technology (NIST). We describe the approach adopted and the issues that had to be considered in building a multilingual test suite and developing appropriate evaluation procedures for cross-language systems. From 2000 on, a cross-language evaluation activity for European languages known as CLEF (Cross-Language Evaluation Forum) will be coordinated in Europe, while TREC will focus on Asian languages. The implications of the move to Europe and the intentions for the future are discussed. |