Multitask Learning and Extensions of Dynamic Coattention Network

Abstract

The Dynamic Coattention Network (DCN) was introduced in late 2016 and achieved state-of-the-art performance on the Stanford Question Answering Dataset (SQuAD). In this paper, we re-implement the DCN and explore several extensions to it: multi-task learning with the Quora question pairs dataset, a loss function that accounts for distance from the ground truth, variations of the sentinel vectors, a novel pre-processing trick, modifications to the coattention encoder architecture, and hyperparameter tuning. After joint training, we observe a 2\% increase in F1 on the Quora dataset. We conclude that multi-task learning benefits the simpler task more than the more complex one. On the CodaLab leaderboard, we achieved F1 = 67.546 and EM = 56.66.
