Leveraging Ontological Knowledge for Neural Language Models

Published in the Young Researchers' Symposium, CoDS-COMAD 2019, Kolkata, India

Abstract

Neural language models such as Word2Vec and GloVe have been shown to encode semantic relatedness between words. Improvements in learning these embeddings can boost performance in numerous downstream applications such as sentiment analysis, question answering, and dialogue generation. Lexical ontologies such as WordNet, in contrast, supply information about semantic similarity rather than relatedness. Further, learning word embeddings from small corpora is difficult for data-hungry neural networks. This work shows how methods that combine Word2Vec with ontologies can achieve better performance, reduce training time, and help adapt to domains with minimal amounts of data.
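
As a concrete illustration of one common way to inject ontological knowledge into pre-trained embeddings, the sketch below implements retrofitting in the spirit of Faruqui et al. (2015): each word's vector is iteratively pulled toward the vectors of its WordNet synonyms. This is a minimal sketch, not necessarily the method used in the paper; the `vectors` dictionary, the `alpha`/`beta` weights, and the use of NLTK's WordNet interface are assumptions made for illustration.

```python
# Minimal retrofitting sketch: blend distributional vectors with WordNet
# synonymy. Requires `pip install nltk numpy` and nltk.download('wordnet').
import numpy as np
from nltk.corpus import wordnet as wn

def wordnet_neighbors(word):
    """Collect synonym lemmas of `word` from all of its WordNet synsets."""
    neighbors = set()
    for synset in wn.synsets(word):
        for lemma in synset.lemma_names():
            if lemma.lower() != word:
                neighbors.add(lemma.lower())
    return neighbors

def retrofit(vectors, iterations=10, alpha=1.0, beta=1.0):
    """Iteratively move each vector toward the mean of its ontology
    neighbors, while alpha anchors it to its original (corpus-trained)
    position. `vectors` maps word -> np.ndarray of a fixed dimension."""
    new_vectors = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iterations):
        for word in vectors:
            neighbors = [n for n in wordnet_neighbors(word) if n in vectors]
            if not neighbors:
                continue  # no ontological evidence; keep the corpus vector
            neighbor_sum = sum(new_vectors[n] for n in neighbors)
            new_vectors[word] = (alpha * vectors[word] + beta * neighbor_sum) \
                                / (alpha + beta * len(neighbors))
    return new_vectors
```

In a low-resource setting, one would first train Word2Vec on the small domain corpus (e.g., with gensim), then pass the resulting word-to-vector mapping through `retrofit` before using the embeddings downstream.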


Recommended citation: Deshpande, A., & Jegadeesan, M. (2019, January). Leveraging Ontological Knowledge for Neural Language Models. In Proceedings of the ACM India Joint International Conference on Data Science and Management of Data (pp. 350-353). ACM.