More data means less inference: A pseudo-max approach to structured learning
The problem of learning to predict structured labels is of key importance in many applications. However, for general graph structures both learning and inference in this setting are intractable. Here we show that it is possible to circumvent this difficulty when the input distribution is rich enough...
Main Authors: Sontag, David (Author); Meshi, Ofer (Author); Jaakkola, Tommi S. (Contributor); Globerson, Amir (Author)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor); Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor)
Format: Article
Language: English
Published: Neural Information Processing Systems Foundation, 2011-07-06T14:56:22Z.
Online Access: Get fulltext
Similar Items
- Learning efficiently with approximate inference via dual losses
  by: Meshi, Ofer, et al. Published: (2011)
- Convergence Rate Analysis of MAP Coordinate Minimization Algorithms
  by: Meshi, Ofer, et al. Published: (2022)
- Convergence Rate Analysis of MAP Coordinate Minimization Algorithms
  by: Meshi, Ofer, et al. Published: (2021)
- Learning Bayesian network structure using LP relaxations
  by: Jaakkola, Tommi S., et al. Published: (2011)
- Steps to Excellence: Simple Inference with Refined Scoring of Dependency Trees
  by: Zhang, Yuan, et al. Published: (2015)