Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Double Q-Learning

less than 1 minute read

Published:

Deep Double Q-Learning — Why you should use it [Link]
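
As a quick reference for the technique the post discusses, here is a minimal tabular sketch of the double Q-learning update (names like `q_a`, `q_b`, `alpha`, and `gamma` are illustrative; the deep variant replaces the two tables with an online and a target network):

```python
import numpy as np

def double_q_update(q_a, q_b, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One double Q-learning update of table q_a, using q_b for evaluation.

    Decoupling action *selection* (argmax over q_a) from action
    *evaluation* (q_b's value) removes the maximization bias that makes
    standard Q-learning overestimate action values.
    """
    a_star = np.argmax(q_a[s_next])            # select with one table
    target = r + gamma * q_b[s_next, a_star]   # evaluate with the other
    q_a[s, a] += alpha * (target - q_a[s, a])

# Each step, a coin flip decides which table is updated, with the roles
# of q_a and q_b swapped.
```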

Portfolio

Publications

Leveraging Ontological Knowledge for Neural Language Models

Published in Young Researchers' Symposium, CoDS-COMAD 2019, Kolkata, India, 2019

We propose and experiment with methods to combine the use of ontologies and function approximators. We pose the methods as a type of data-knowledge trade-off and achieve superior performance on multiple tasks.

Download here

Guiding Attention for Self-Supervised Learning with Transformers

Published in Findings of EMNLP, EMNLP 2020, Virtual Event, 2020

We introduce a method called attention-guidance, which uses intuitive priors to modify self-attention heads in Transformer models. Our approach yields faster convergence and better downstream performance, and allows large models to converge on as few as 4 GPUs.

Download here
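
Purely as illustration (the paper's exact priors and loss may differ from this), here is a hypothetical sketch of one way to guide a self-attention head with an intuitive prior: an auxiliary loss pulling the head's attention map toward a fixed "attend to the previous token" pattern:

```python
import torch

def attention_guidance_loss(attn, eps=1e-8):
    """Auxiliary loss on one head's attention map.

    attn: [batch, seq_len, seq_len] tensor of softmax attention weights.
    The prior below (previous-token attention) is an assumption chosen
    for illustration, not necessarily one of the paper's patterns.
    """
    seq_len = attn.size(-1)
    prior = torch.zeros(seq_len, seq_len, device=attn.device)
    prior[torch.arange(1, seq_len), torch.arange(seq_len - 1)] = 1.0
    prior[0, 0] = 1.0  # the first token attends to itself
    # Cross-entropy between the fixed prior and the learned attention.
    return -(prior * (attn + eps).log()).sum(-1).mean()

# Such a term would be added to the pre-training loss with a small
# weight, typically annealed away as the heads learn their own patterns.
```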

Talks

Slides from discussions of interesting papers

Published:

  • ELECTRA - This work introduces a new pre-training objective called replaced-token detection, which is more efficient than the traditional masked language modeling objective; a minimal sketch follows this list. [Slides]
  • Plug and Play Language Models - This work introduces a simple approach for controlling text generation based on attributes. [Slides]
  • Abstractive and Extractive Summarization - Presented two key papers on text summarization with neural networks. [Slides]
  • A Laplacian Framework for Option Discovery in Reinforcement Learning - Presented the usefulness of Proto-Value Functions and the paper's Laplacian framework for finding useful options. [Slides]
  • Dataless Classification - Presented the idea of using the semantic information in label names and how this idea dovetails with co-training. [Slides]
  • Children search for information as efficiently as adults, but seek additional confirmatory evidence - Presented a cognitive account of how children and adults search for information equally efficiently but differ in how they implement the stopping rule, along with a primer on ANOVA and Bonferroni-corrected multiple comparisons. [Slides]
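
As a rough illustration of replaced-token detection, here is a minimal PyTorch sketch under assumptions: `discriminator` is a hypothetical stand-in that returns one binary logit per token, and `generator_logits` would come from a small masked language model.

```python
import torch
import torch.nn.functional as F

def replaced_token_detection_loss(input_ids, mask_positions,
                                  generator_logits, discriminator):
    """Corrupt masked positions with generator samples, then train the
    discriminator to classify every token as original vs. replaced."""
    # Sample plausible replacements at the masked positions.
    sampled = torch.distributions.Categorical(
        logits=generator_logits[mask_positions]).sample()
    corrupted = input_ids.clone()
    corrupted[mask_positions] = sampled
    # Label 1 where the sampled token differs from the original, else 0
    # (a generator that happens to sample the original counts as "original").
    labels = (corrupted != input_ids).float()
    # The discriminator scores every position with a binary logit.
    logits = discriminator(corrupted)
    return F.binary_cross_entropy_with_logits(logits, labels)
```

Because the loss is defined over all positions rather than only the small fraction that were masked, each batch provides a denser training signal, which is the source of the efficiency gain.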

Teaching

Deep Learning Masterclass

Workshop, Indian Institute of Technology Madras, 2018

Conducted classes covering the basics of Machine Learning and Deep Learning, Optimization, Regularization, CNNs, Word Vectors, RNNs, and Encoder-Decoder Models for an audience of 90 undergraduate and post-graduate students.

Princeton AI4ALL

Workshop, Princeton University, 2020

Research instructor for Princeton AI4ALL, an initiative intended to increase diversity and inclusion in the field of artificial intelligence. Developed material, taught classes, and mentored a group of six students on a Natural Language Processing project on detecting fake news.