LREC 2000 2nd International Conference on Language Resources & Evaluation
 


Title: The Evaluation of Systems for Cross-language Information Retrieval
Authors: Braschler Martin (Eurospider Information Technology AG, Zurich, Switzerland. braschler@eurospider.com)
Harman Donna (National Institute of Standards and Technology, Gaithersburg, Md, USA. donna.harman@nist.gov)
Hess Michael (Dept. of Information Technology, University of Zurich, Switzerland. hess@ifi.unizh.ch)
Kluck Michael (InformationsZentrum Sozialwissenschaften (IZ), Bonn, Germany. mkl@bonn.iz-soz.de)
Peters Carol (Istituto di Elaborazione della Informazione - CNR, Pisa, Italy. carol@iei.pi.cnr.it)
Schäuble Peter (Eurospider Information Technology AG, Zurich, Switzerland. schauble@eurospider.com)
Keywords: Cross-language Information Retrieval, Multilingual Test Collections, System Evaluation Methodologies
Session: Session EO5 - Information Retrieval and Question Answering Evaluation
Full Paper: 70.ps, 70.pdf
Abstract: We describe the creation of an infrastructure for testing cross-language text retrieval systems within the context of the Text REtrieval Conference (TREC) organised by the US National Institute of Standards and Technology (NIST). We describe the approach adopted and the issues that had to be considered when building a multilingual test suite and developing appropriate evaluation procedures for cross-language systems. From 2000 on, a cross-language evaluation activity for European languages, known as CLEF (Cross-Language Evaluation Forum), will be coordinated in Europe, while TREC will focus on Asian languages. The implications of the move to Europe and the intentions for the future are discussed.