Visualizing and Understanding Deep Learning Models in NLP
The proposed methods offer interpretable explanations for various aspects of neural models: how word meanings compose to form higher-level language units such as phrases or sentences; how neural models select and filter important words; and why some models perform better than others. More importantly, they provide efficient tools for error analysis that can be applied to different neural architectures across various NLP applications, with the potential to improve the effectiveness of a wide variety of NLP systems.
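The representation-erasure idea from the first reference can be sketched in a few lines: a word's importance is measured by how much the model's output changes when that word's representation is erased from the input. The sketch below is a minimal illustration with a hypothetical toy scoring function standing in for a trained neural model; the function and word lists are invented for demonstration, not from the cited papers.

```python
def importance_by_erasure(words, score_fn):
    """Return (word, importance) pairs, where importance is the drop
    in the model's score when that word is erased from the input."""
    base = score_fn(words)
    importances = []
    for i in range(len(words)):
        erased = words[:i] + words[i + 1:]  # erase one word from the input
        importances.append((words[i], base - score_fn(erased)))
    return importances

# Toy sentiment scorer standing in for a real neural model
# (purely illustrative word lists, not from the cited work):
POSITIVE = {"great", "good"}
NEGATIVE = {"terrible", "bad"}

def toy_sentiment(words):
    return sum(1 for w in words if w in POSITIVE) - \
           sum(1 for w in words if w in NEGATIVE)

if __name__ == "__main__":
    sentence = "the movie was great but the ending was terrible".split()
    for word, imp in importance_by_erasure(sentence, toy_sentiment):
        print(f"{word:10s} {imp:+d}")
```

With a real model, `score_fn` would be the network's prediction (e.g., a log-likelihood or class probability), and erasure would zero out the word's embedding rather than delete the token; the leave-one-out loop is the same.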
References:
Understanding Neural Networks through Representation Erasure
Visualizing and Understanding Neural Models in NLP
Bio: Jiwei Li received his B.S. in Biology from Peking University (2008-2012) and his Ph.D. in Computer Science from Stanford University (2014-2017), where he was advised by Prof. Dan Jurafsky. He was a winner of the Facebook Fellowship (2015) and the Baidu Fellowship (2016). He works on Natural Language Processing.