LEADER |
01894 am a22002053u 4500 |
001 |
137308 |
042 |
|
|
|a dc
|
100 |
1 |
0 |
|a Roy, Subhro
|e author
|
710 |
2 |
|
|a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
|e contributor
|
700 |
1 |
0 |
|a Noseworthy, Michael
|e author
|
700 |
1 |
0 |
|a Paul, Rohan
|e author
|
700 |
1 |
0 |
|a Park, Daehyung
|e author
|
700 |
1 |
0 |
|a Roy, Nicholas
|e author
|
245 |
1 |
0 |
|a Leveraging Past References for Robust Language Grounding
|
260 |
|
|
|b Association for Computational Linguistics (ACL),
|c 2021-11-03T19:18:05Z.
|
856 |
|
|
|z Get fulltext
|u https://hdl.handle.net/1721.1/137308
|
520 |
|
|
|a © 2019 Association for Computational Linguistics. Grounding referring expressions to objects in an environment has traditionally been considered a one-off, ahistorical task. However, in realistic applications of grounding, multiple users will repeatedly refer to the same set of objects. As a result, past referring expressions for objects can provide strong signals for grounding subsequent referring expressions. We therefore reframe the grounding problem from the perspective of coreference detection and propose a neural network that detects when two expressions refer to the same object. The network combines information from vision and past referring expressions to resolve which object is being referred to. Our experiments show that detecting referring expression coreference is an effective way to ground objects described by subtle visual properties, which standard visual grounding models have difficulty capturing. We also show that the ability to detect object coreference allows the grounding model to perform well even when it encounters object categories not seen in the training data.
|
546 |
|
|
|a en
|
655 |
7 |
|
|a Article
|
773 |
|
|
|t CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference
|