Publications

Please click on individual links for more information.

Guiding Attention for Self-Supervised Learning with Transformers

Published in Findings of EMNLP, EMNLP 2020, Virtual Event, 2020

We introduce a method called attention guidance, which uses intuitive priors to modify the self-attention heads in Transformer models. Our approach yields faster convergence and better downstream performance, and enables training large models to convergence on as few as 4 GPUs.
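The paper itself specifies the exact priors and objective; purely as an illustrative sketch, one plausible form of attention guidance is an auxiliary loss that pulls a head's attention map toward a fixed pattern. The "attend to the previous token" prior, the MSE loss form, and all function names below are assumptions for illustration, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def previous_token_prior(seq_len: int) -> torch.Tensor:
    """Hypothetical prior pattern: each position attends to the previous token."""
    prior = torch.zeros(seq_len, seq_len)
    positions = torch.arange(1, seq_len)
    prior[positions, positions - 1] = 1.0
    prior[0, 0] = 1.0  # the first token has no predecessor; attend to itself
    return prior

def attention_guidance_loss(attn: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
    """Penalize the distance between a head's attention map and the prior.

    attn:  (batch, seq_len, seq_len) softmax-normalized attention weights
    prior: (seq_len, seq_len) target pattern, broadcast over the batch
    """
    return F.mse_loss(attn, prior.expand_as(attn))

# Illustrative usage: add the guidance term to the pretraining objective.
# total_loss = mlm_loss + guidance_weight * attention_guidance_loss(attn, prior)
```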

Download here

Leveraging Ontological Knowledge for Neural Language Models

Published in Young Researchers' Symposium, CoDS-COMAD 2019, Kolkata, India, 2019

We propose and experiment with methods to combine the use of ontologies and function approximators. We pose the methods as a type of data-knowledge trade-off and achieve superior performance on multiple tasks.

Download here