SUMMARY: Session P2-W
| Title | A Closer Look at Skip-gram Modelling |
|---|---|
| Authors | D. Guthrie, B. Allison, W. Liu, L. Guthrie, Y. Wilks |
| Abstract | Data sparsity is a large problem in natural language processing that refers to the fact that language is a system of rare events, so varied and complex that, even using an extremely large corpus, we can never accurately model all possible strings of words. This paper examines the use of skip-grams (a technique whereby n-grams are still stored to model language, but tokens within them are allowed to be skipped) to overcome the data sparsity problem. We analyze this by computing all possible skip-grams in a training corpus and measuring how many adjacent (standard) n-grams these cover in test documents. We examine skip-gram modelling using one to four skips with various amounts of training data and test against similar documents as well as documents generated by a machine translation system. In this paper we also determine the amount of extra training data required to achieve skip-gram coverage using standard adjacent tri-grams. |
| Keywords | data sparsity, skip-gram, language modelling |
| Full paper | A Closer Look at Skip-gram Modelling |
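As a concrete illustration of the technique the abstract describes, here is a minimal Python sketch of k-skip-n-gram extraction together with the coverage measurement the abstract outlines (the fraction of a test document's adjacent n-grams that appear among the training text's skip-grams). The function names `skip_grams` and `coverage` and the whitespace tokenization are illustrative assumptions, not taken from the paper.

```python
from itertools import combinations

def skip_grams(tokens, n, k):
    """Return the set of k-skip-n-grams of a token sequence.

    Each result is an in-order tuple of n tokens drawn from a window of
    at most n + k consecutive tokens. Anchoring every subsequence at the
    first token of its window generates each occurrence exactly once, and
    the plain adjacent n-grams (zero skips) are always included.
    """
    grams = set()
    for start in range(len(tokens) - n + 1):
        window = tokens[start : start + n + k]
        # Fix the window's first token; choose the remaining n - 1
        # positions from the rest of the window. Gaps between chosen
        # positions are the skipped tokens (at most k in total).
        for tail in combinations(range(1, len(window)), n - 1):
            grams.add((window[0],) + tuple(window[i] for i in tail))
    return grams

def coverage(train_tokens, test_tokens, n, k):
    """Fraction of adjacent (standard) n-grams in the test text that are
    covered by the k-skip-n-grams of the training text."""
    train = skip_grams(train_tokens, n, k)
    test = [tuple(test_tokens[i : i + n])
            for i in range(len(test_tokens) - n + 1)]
    return sum(g in train for g in test) / len(test) if test else 0.0
```

For example, a five-token sentence of distinct words yields nine 2-skip-bi-grams from `skip_grams(sentence.split(), 2, 2)` where only four adjacent bi-grams exist; this expansion of the stored model is the source of the extra coverage the paper measures.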