Neural scene de-rendering

We study the problem of holistic scene understanding. We would like to obtain a compact, expressive, and interpretable representation of scenes that encodes information such as the number of objects and their categories, poses, positions, etc. Such a representation would allow us to reason about and even reconstruct or manipulate elements of the scene. Previous works have used encoder-decoder-based neural architectures to learn image representations; however, representations obtained in this way are typically uninterpretable, or only explain a single object in the scene. In this work, we propose a new approach to learn an interpretable distributed representation of scenes. Our approach employs a deterministic rendering function as the decoder, mapping a naturally structured and disentangled scene description, which we named scene XML, to an image. By doing so, the encoder is forced to perform the inverse of the rendering operation (a.k.a. de-rendering) to transform an input image to the structured scene XML that the decoder used to produce the image. We use an object-proposal-based encoder that is trained by minimizing both the supervised prediction and the unsupervised reconstruction errors. Experiments demonstrate that our approach works well on scene de-rendering with two different graphics engines, and our learned representation can be easily adapted for a wide range of applications like image editing, inpainting, visual analogy-making, and image captioning.
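
The abstract sketches an analysis-by-synthesis loop: an encoder predicts a structured scene description (scene XML), a fixed deterministic renderer maps that description back to an image, and training minimizes a supervised error on the predicted description plus an unsupervised error on the re-rendered image. The sketch below is a minimal, hypothetical illustration of that loop, not the authors' implementation: SceneEncoder, render_scene, the differentiable blob renderer (a stand-in for the paper's black-box graphics engines), the fixed object slots (the paper uses an object-proposal-based encoder), and all sizes and loss weights are assumptions.

```python
# Hypothetical sketch of the de-rendering training loop described above,
# not the authors' implementation. A toy differentiable renderer stands in
# for the paper's black-box graphics engines, and fixed object slots stand
# in for the object-proposal-based encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG = 32       # toy image resolution (assumption)
MAX_OBJ = 3    # fixed number of object slots in the toy scene description
N_SHAPE = 2    # toy object categories, rendered as blobs of different sizes

class SceneEncoder(nn.Module):
    """Image -> structured description: per-slot category logits and (x, y)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten())
        feat = 32 * (IMG // 4) ** 2
        self.shape_head = nn.Linear(feat, MAX_OBJ * N_SHAPE)
        self.pos_head = nn.Linear(feat, MAX_OBJ * 2)

    def forward(self, x):
        h = self.backbone(x)
        logits = self.shape_head(h).view(-1, MAX_OBJ, N_SHAPE)
        pos = torch.sigmoid(self.pos_head(h)).view(-1, MAX_OBJ, 2)
        return logits, pos

def render_scene(pos, shape):
    """Deterministic decoder: one Gaussian blob per slot, with the blob
    size set by the category. Differentiable w.r.t. positions so the
    reconstruction error can backpropagate in this toy."""
    sigma = 0.04 + 0.04 * shape.float()                       # (B, K)
    axis = torch.linspace(0.0, 1.0, IMG)
    gy, gx = torch.meshgrid(axis, axis, indexing="ij")
    grid = torch.stack([gx, gy], dim=-1)                      # (IMG, IMG, 2)
    d2 = ((grid[None, None] - pos[:, :, None, None, :]) ** 2).sum(-1)
    blobs = torch.exp(-d2 / (2 * sigma[..., None, None] ** 2))
    return blobs.sum(1, keepdim=True).clamp(max=1.0)          # (B, 1, IMG, IMG)

def toy_batch(batch=16):
    """Synthetic scenes with known ground-truth descriptions, slots
    sorted left-to-right so the slot order is well defined."""
    gt_pos = torch.rand(batch, MAX_OBJ, 2) * 0.8 + 0.1
    gt_shape = torch.randint(0, N_SHAPE, (batch, MAX_OBJ))
    order = gt_pos[..., 0].argsort(dim=1)
    gt_pos = torch.gather(gt_pos, 1, order.unsqueeze(-1).expand(-1, -1, 2))
    gt_shape = torch.gather(gt_shape, 1, order)
    return render_scene(gt_pos, gt_shape), gt_shape, gt_pos

enc = SceneEncoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
for step in range(200):
    imgs, gt_shape, gt_pos = toy_batch()
    logits, pos = enc(imgs)
    # Supervised prediction error on the structured description...
    supervised = (F.cross_entropy(logits.reshape(-1, N_SHAPE), gt_shape.reshape(-1))
                  + F.mse_loss(pos, gt_pos))
    # ...plus unsupervised reconstruction error on the re-rendered image.
    # argmax is non-differentiable, so the category is trained only by
    # the supervised term; the position head gets both gradients.
    recon = F.mse_loss(render_scene(pos, logits.argmax(dim=-1)), imgs)
    loss = supervised + recon
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the toy renderer is differentiable, the reconstruction term here trains the position head directly; with a real, non-differentiable graphics engine that gradient path is unavailable, and the reconstruction error has to be exploited by other means.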

Bibliographic Details
Main Authors: Wu, Jiajun (Author), Tenenbaum, Joshua B (Author), Kohli, Pushmeet (Author)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2020-08-18.
Online Access: https://hdl.handle.net/1721.1/126659
LEADER 02179 am a22001933u 4500
001 126659
042 |a dc 
100 1 0 |a Wu, Jiajun  |e author 
100 1 0 |a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  |e contributor 
700 1 0 |a Tenenbaum, Joshua B  |e author 
700 1 0 |a Kohli, Pushmeet  |e author 
245 0 0 |a Neural scene de-rendering 
260 |b Institute of Electrical and Electronics Engineers (IEEE),   |c 2020-08-18T20:41:05Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/126659 
520 |a We study the problem of holistic scene understanding. We would like to obtain a compact, expressive, and interpretable representation of scenes that encodes information such as the number of objects and their categories, poses, positions, etc. Such a representation would allow us to reason about and even reconstruct or manipulate elements of the scene. Previous works have used encoder-decoder-based neural architectures to learn image representations; however, representations obtained in this way are typically uninterpretable, or only explain a single object in the scene. In this work, we propose a new approach to learn an interpretable distributed representation of scenes. Our approach employs a deterministic rendering function as the decoder, mapping a naturally structured and disentangled scene description, which we named scene XML, to an image. By doing so, the encoder is forced to perform the inverse of the rendering operation (a.k.a. de-rendering) to transform an input image to the structured scene XML that the decoder used to produce the image. We use an object-proposal-based encoder that is trained by minimizing both the supervised prediction and the unsupervised reconstruction errors. Experiments demonstrate that our approach works well on scene de-rendering with two different graphics engines, and our learned representation can be easily adapted for a wide range of applications like image editing, inpainting, visual analogy-making, and image captioning. 
546 |a en 
655 7 |a Article 
773 |t 10.1109/CVPR.2017.744 
773 |t IEEE Conference on Computer Vision and Pattern Recognition (CVPR)