Unanimous Prediction for 100% Precision with Application to Learning Semantic Mappings

Bibliographic Details
Main Authors: Khani, Fereshte (Author), Rinard, Martin (Author), Liang, Percy (Author)
Format: Article
Language: English
Published: Association for Computational Linguistics, 2021-11-01T18:43:45Z.
Subjects:
Online Access: Get fulltext
LEADER 01401 am a22001693u 4500
001 137040
042 |a dc 
100 1 0 |a Khani, Fereshte  |e author 
700 1 0 |a Rinard, Martin  |e author 
700 1 0 |a Liang, Percy  |e author 
245 0 0 |a Unanimous Prediction for 100% Precision with Application to Learning Semantic Mappings 
260 |b Association for Computational Linguistics,   |c 2021-11-01T18:43:45Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/137040 
520 |a © 2016 Association for Computational Linguistics. Can we train a system that, on any new input, either says "don't know" or makes a prediction that is guaranteed to be correct? We answer the question in the affirmative, provided our model family is well-specified. Specifically, we introduce the unanimity principle: only predict when all models consistent with the training data predict the same output. We operationalize this principle for semantic parsing, the task of mapping utterances to logical forms. We develop a simple, efficient method that reasons over the infinite set of all consistent models by only checking two of the models. We prove that our method obtains 100% precision even with a modest amount of training data from a possibly adversarial distribution. Empirically, we demonstrate the effectiveness of our approach on the standard GeoQuery dataset. 
546 |a en 
655 7 |a Article 
773 |t 10.18653/v1/p16-1090
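
As a sketch of how the unanimity principle in the abstract (field 520) can be operationalized: the Python snippet below checks unanimity for a noiseless linear regression family, where the set of consistent models is an affine subspace and agreement between two suitably chosen models certifies agreement among all of them. This is an illustration under assumed conditions, not the paper's semantic-parsing algorithm; the function name unanimous_predict is hypothetical, and the single random null-space direction stands in (almost surely correctly) for an exhaustive check.

# Minimal sketch of the unanimity principle for a noiseless linear
# model family. Assumption: the family is well-specified, so the set
# of consistent models is exactly {w : X w = y}. Not the paper's method.
import numpy as np

def unanimous_predict(X, y, x_new, tol=1e-8):
    """Return a prediction for x_new only if every model consistent
    with the training data (X, y) agrees on it; otherwise return None
    ("don't know")."""
    # One consistent model: the minimum-norm least-squares solution.
    w0, *_ = np.linalg.lstsq(X, y, rcond=None)

    # A second consistent model: w0 shifted along the null space of X.
    # Any such shift still fits the training data exactly.
    rng = np.random.default_rng(0)
    v = rng.standard_normal(X.shape[1])
    v -= np.linalg.pinv(X) @ (X @ v)   # keep only the null(X) component
    w1 = w0 + v

    # Unanimity holds iff x_new has no component in null(X); testing
    # one random null-space direction detects this almost surely.
    p0, p1 = x_new @ w0, x_new @ w1
    return p0 if abs(p0 - p1) < tol else None

# Two training points in R^3 leave the third coordinate unconstrained.
X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
y = np.array([2.0, 3.0])
print(unanimous_predict(X, y, np.array([1.0, 1.0, 0.0])))  # 5.0 (unanimous)
print(unanimous_predict(X, y, np.array([0.0, 0.0, 1.0])))  # None (abstain)

Abstaining exactly when the new input carries a direction the training data never constrained is what makes the precision guarantee possible: every emitted prediction is forced by the data, so under well-specifiedness it matches the true model.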