Leveraging Ontological Knowledge for Neural Language Models

Published in the Young Researchers' Symposium, CoDS-COMAD 2019, Kolkata, India

Paper

Short Summary

  • Achieved 5% higher performance and 35% faster convergence in Word2Vec by using WordNet for weight initialization (see the sketch after this list).
  • Demonstrated enhanced semantic similarity by performing domain transfer of vectors using the RCM model and WordNet Domains.
  • Proposed the HRCM model and H-Ordinal Constraints for hierarchy-aware vectors to balance the data-knowledge trade-off.
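
This summary does not spell out the paper's exact initialization scheme, so the following is only a minimal sketch, assuming gensim and NLTK's WordNet: words that share a first WordNet synset start from nearby vectors before skip-gram training. The function name `wordnet_init` and the cluster-by-first-sense heuristic are illustrative choices, not the paper's method.

```python
# Minimal sketch: WordNet-informed weight initialization for Word2Vec.
# Assumes gensim 4.x and NLTK with the WordNet corpus downloaded
# (nltk.download('wordnet')). The grouping heuristic is hypothetical.
import numpy as np
from gensim.models import Word2Vec
from nltk.corpus import wordnet as wn

def wordnet_init(model):
    """Overwrite gensim's random init so WordNet synonyms start close together."""
    rng = np.random.default_rng(42)
    cluster_seed = {}  # synset name -> shared base vector
    for word in model.wv.index_to_key:
        synsets = wn.synsets(word)
        if not synsets:
            continue  # words unknown to WordNet keep their random init
        key = synsets[0].name()  # most frequent sense as the cluster id
        if key not in cluster_seed:
            cluster_seed[key] = rng.normal(scale=0.1, size=model.vector_size)
        # small per-word noise keeps synonyms distinct but nearby
        vec = cluster_seed[key] + rng.normal(scale=0.01, size=model.vector_size)
        model.wv.vectors[model.wv.key_to_index[word]] = vec

corpus = [["the", "quick", "brown", "fox"], ["a", "fast", "auburn", "fox"]]
model = Word2Vec(vector_size=100, min_count=1)
model.build_vocab(corpus)
wordnet_init(model)  # seed weights before training begins
model.train(corpus, total_examples=model.corpus_count, epochs=5)
```

Because gensim only materializes the embedding matrix after `build_vocab`, the overwrite has to happen between vocabulary construction and `train`; training then refines the ontology-seeded vectors rather than starting from pure noise.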

Abstract

Neural language models such as Word2Vec and GloVe have been shown to encode semantic relatedness between words. Improvements in learning these embeddings can boost performance in numerous downstream applications such as sentiment analysis, question answering, and dialogue generation. Lexical ontologies such as WordNet, by contrast, are known to supply information about semantic similarity rather than relatedness. Further, extracting word embeddings from small corpora is difficult for data-hungry neural networks. This work shows how methods that combine Word2Vec and ontologies can achieve better performance, reduce training time, and adapt to domains with minimal data.
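
For context on the combination: RCM, as mentioned in the summary, presumably refers to the Relation Constrained Model of Yu and Dredze (2014); the notation below follows that reading and is not taken from this paper. Under that assumption, the ontology-side objective added to skip-gram is

$$ J_{\mathrm{RCM}} = \frac{1}{N} \sum_{i=1}^{N} \sum_{w \in R_{w_i}} \log p(w \mid w_i) $$

where $R_{w_i}$ is the set of words related to $w_i$ in the ontology (e.g., WordNet synonymy) and $p(w \mid w_i)$ is a softmax over the embeddings. Maximizing $J_{\mathrm{RCM}}$ alongside the usual corpus objective pulls ontologically related words toward one another, which is what makes the domain transfer with small corpora feasible.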