Open Vocabulary Scene Parsing


Bibliographic Details
Main Authors: Zhao, Hang (Author), Puig Fernandez, Xavier (Author), Zhou, Bolei (Author), Fidler, Sanja (Author), Torralba, Antonio (Author)
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor), Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2020-01-20.
Description
Summary: Recognizing arbitrary objects in the wild has been a challenging problem due to the limitations of existing classification models and datasets. In this paper, we propose a new task that aims at parsing scenes with a large and open vocabulary, and we explore several evaluation metrics for this problem. Our approach is a framework that jointly embeds image pixels and word concepts, where the word concepts are connected by semantic relations. We validate the open-vocabulary prediction ability of our framework on the ADE20K dataset, which covers a wide variety of scenes and objects. We further explore the trained joint embedding space to show its interpretability. Keywords: streaming media; vocabulary; training; semantics; predictive models; visualization
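The core idea in the abstract — embedding image pixels and word concepts into one shared space and labeling each pixel by its nearest concept — can be illustrated with a minimal toy sketch. This is not the authors' implementation: the random projection, toy features, and concept list below are all hypothetical stand-ins for the learned image branch and the semantically structured concept vocabulary described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical concept vocabulary; in the paper, concept embeddings are
# learned so that semantically related concepts (e.g. "chair" and
# "furniture") lie close together in the shared space.
concepts = ["wall", "chair", "furniture", "tree"]
concept_emb = rng.normal(size=(len(concepts), 8))
concept_emb /= np.linalg.norm(concept_emb, axis=1, keepdims=True)

def embed_pixels(features, projection):
    """Project per-pixel image features into the concept space
    and normalize them to unit length."""
    emb = features @ projection
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

# Toy per-pixel features and a random projection standing in for the
# learned image-to-embedding network.
pixel_feats = rng.normal(size=(5, 16))   # 5 pixels, 16-d features
projection = rng.normal(size=(16, 8))    # maps features into the 8-d space

pixel_emb = embed_pixels(pixel_feats, projection)

# Open-vocabulary prediction: each pixel takes the label of its
# nearest concept by cosine similarity (dot product of unit vectors).
scores = pixel_emb @ concept_emb.T
labels = [concepts[i] for i in scores.argmax(axis=1)]
print(labels)
```

Because prediction is nearest-neighbor search in the embedding space, the vocabulary can grow after training simply by embedding new concept vectors, which is what makes the parsing "open vocabulary."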
National Science Foundation (U.S.) (Grant 1524817)
Samsung Electronics Co. (Grant 1524817)