Constraints Driven Learning for Natural Language Understanding

Published August 17, 2016, 2:35
Intelligent Information Access and Extraction pose significant challenges for Natural Language analysis. Tasks of interest include semantic role labeling (determining who did what to whom, when, and where), information extraction (identifying entities, relations, and events), following natural language instructions, and textual entailment (determining whether one utterance is a likely consequence of another). A computational approach to these challenges often involves assigning values to sets of interdependent variables and thus frequently necessitates performing global inference that accounts for these interdependencies. This talk presents research on Constrained Conditional Models (CCMs), a framework that augments probabilistic models with declarative constraints as a way to support such decisions. We will present a framework we introduced a few years ago, formulating decision problems in NLP as Integer Linear Programming problems, but focus on new algorithms for training these global models using indirect supervision signals. Learning models for structured tasks is difficult, partly because generating supervision signals is costly. We show that it is often easy to obtain a related indirect supervision signal, and discuss several options for deriving this supervision signal, including inducing it from the world's response to the model's actions. Our learning framework is "Constraints Driven" in the sense that it allows, and even gains from, global inference that is guided by expressive declarative knowledge (encoded as constraints). Experimental results show the significant contribution of easy-to-get indirect supervision on NLP tasks such as information extraction, transliteration, and textual entailment.
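To make the CCM inference idea concrete, the following is a minimal, purely illustrative sketch: local classifier scores for each variable are combined under a declarative constraint, and the highest-scoring globally consistent assignment is selected. The labels, scores, and the "at least one entity" constraint are hypothetical, and brute-force search stands in for the ILP solver that would be used in practice.

```python
from itertools import product

LABELS = ["O", "ENTITY"]

# Hypothetical local scores: local_scores[i][label] is the score a
# local classifier assigns to labeling token i with `label`.
local_scores = [
    {"O": 0.9, "ENTITY": 0.1},
    {"O": 0.6, "ENTITY": 0.7},
    {"O": 0.8, "ENTITY": 0.2},
]

def satisfies_constraints(assignment):
    # Declarative constraint (illustrative): the sentence must mention
    # at least one entity.
    return any(label == "ENTITY" for label in assignment)

def ccm_inference(scores):
    # Enumerate all global assignments (an ILP solver replaces this in
    # real systems) and keep the best one that obeys the constraints.
    best, best_score = None, float("-inf")
    for assignment in product(LABELS, repeat=len(scores)):
        if not satisfies_constraints(assignment):
            continue  # constraint prunes this global assignment
        total = sum(s[lab] for s, lab in zip(scores, assignment))
        if total > best_score:
            best, best_score = assignment, total
    return best, best_score

assignment, score = ccm_inference(local_scores)
print(assignment)  # best assignment containing at least one ENTITY label
```

Note how the constraint changes the outcome: without it, labeling every token "O" would score highest; the constraint forces the globally best assignment to include an entity, which is exactly the interplay between local models and declarative knowledge that CCMs exploit.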