SUMMARY: Session P14-GW
Title | Training Language Models without Appropriate Language Resources: Experiments with an AAC System for Disabled People |
---|---|
Authors | T. Wandmacher, J. Antoine |
Abstract | Statistical Language Models (LMs) are highly dependent on their training resources. This not only makes evaluation results difficult to interpret, it also degrades the performance of an LM-based application in actual use. This question has already been studied by others. Considering a specific domain (text prediction in a communication aid for disabled people), we want to address the problem from a different point of view: the influence of the language register. Considering corpora from five different registers, we discuss three methods to adapt a language model to its actual language resource, ultimately reducing the effect of training dependency: (a) a simple cache model augmenting the probability of the n last inserted words; (b) a user dictionary, keeping every unseen word; and (c) a combined LM interpolating a base model with a dynamically updated user model. Our evaluation is based on the results obtained from a text prediction system working on a trigram LM. |
Keywords | Language Models, Adaptation, Text Prediction, Augmentative and Alternative Communication. |
Full paper | Training Language Models without Appropriate Language Resources: Experiments with an AAC System for Disabled People |
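The sketch below illustrates method (c) from the abstract above: a base trigram model linearly interpolated with a unigram cache over the most recently inserted words. It is a minimal, hypothetical illustration, not the authors' implementation; the class and parameter names (`CacheInterpolatedLM`, `UniformLM`, `cache_size`, `lam`) are assumptions introduced here, and the toy uniform model merely stands in for the paper's trigram LM.

```python
from collections import deque, Counter


class UniformLM:
    """Toy stand-in for the base trigram LM (the paper uses a real trigram model)."""

    def __init__(self, vocab):
        self.vocab = set(vocab)

    def prob(self, word, history):
        # Uniform probability over the known vocabulary, ignoring the history.
        return 1.0 / len(self.vocab) if word in self.vocab else 0.0


class CacheInterpolatedLM:
    """Linear interpolation of a base LM with a unigram cache of recent words:
    P(w | h) = lam * P_base(w | h) + (1 - lam) * P_cache(w)."""

    def __init__(self, base_lm, cache_size=200, lam=0.9):
        self.base_lm = base_lm                  # any object exposing prob(word, history)
        self.cache = deque(maxlen=cache_size)   # the n most recently inserted words
        self.lam = lam                          # interpolation weight of the base model

    def update(self, word):
        # Called after each word the user inserts; a full AAC system would also
        # add unseen words to a user dictionary here (method (b) in the abstract).
        self.cache.append(word)

    def prob(self, word, history):
        counts = Counter(self.cache)
        cache_prob = counts[word] / len(self.cache) if self.cache else 0.0
        return self.lam * self.base_lm.prob(word, history) + (1 - self.lam) * cache_prob


# Example: the cache boosts words the user has typed recently.
lm = CacheInterpolatedLM(UniformLM(["the", "board", "speech", "aid"]), cache_size=3, lam=0.8)
for w in ["speech", "board", "speech"]:
    lm.update(w)
print(lm.prob("speech", ("the",)))   # 0.8 * 0.25 + 0.2 * (2/3) ≈ 0.333
print(lm.prob("aid", ("the",)))      # 0.8 * 0.25 + 0.2 * 0.0   = 0.200
```

With `lam` close to 1 the prediction behaves like the base model alone; lowering it shifts probability mass toward the user's recent vocabulary, which is the intended adaptation effect when the training corpus and the user's register diverge.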