LEADER |
03016 am a22004333u 4500 |
001 |
126872 |
042 |
|
|
|a dc
|
100 |
1 |
0 |
|a Zeng, Andy
|e author
|
710 |
2 |
|
|a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
|e contributor
|
710 |
2 |
|
|a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
|e contributor
|
710 |
2 |
|
|a Massachusetts Institute of Technology. Department of Mechanical Engineering
|e contributor
|
700 |
1 |
0 |
|a Song, Shuran
|e author
|
700 |
1 |
0 |
|a Yu, Kuan-Ting
|e author
|
700 |
1 |
0 |
|a Donlon, Elliott S
|e author
|
700 |
1 |
0 |
|a Hogan, Francois R.
|e author
|
700 |
1 |
0 |
|a Bauza Villalonga, Maria
|e author
|
700 |
1 |
0 |
|a Ma, Daolin
|e author
|
700 |
1 |
0 |
|a Taylor, Orion Thomas
|e author
|
700 |
1 |
0 |
|a Liu, Melody
|e author
|
700 |
1 |
0 |
|a Romo, Eudald
|e author
|
700 |
1 |
0 |
|a Fazeli, Nima
|e author
|
700 |
1 |
0 |
|a Alet, Ferran
|e author
|
700 |
1 |
0 |
|a Chavan Dafle, Nikhil Narsingh
|e author
|
700 |
1 |
0 |
|a Holladay, Rachel
|e author
|
700 |
1 |
0 |
|a Morena, Isabella
|e author
|
700 |
1 |
0 |
|a Qu Nair, Prem
|e author
|
700 |
1 |
0 |
|a Green, Druck
|e author
|
700 |
1 |
0 |
|a Taylor, Ian
|e author
|
700 |
1 |
0 |
|a Liu, Weber
|e author
|
700 |
1 |
0 |
|a Funkhouser, Thomas
|e author
|
700 |
1 |
0 |
|a Rodriguez, Alberto
|e author
|
245 |
0 |
0 |
|a Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching
|
260 |
|
|
|b Institute of Electrical and Electronics Engineers (IEEE),
|c 2020-09-01T16:02:35Z.
|
856 |
|
|
|z Get fulltext
|u https://hdl.handle.net/1721.1/126872
|
520 |
|
|
|a This paper presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses a category-agnostic affordance prediction algorithm to select and execute among four different grasping primitive behaviors. It then recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT-Princeton Team system that took 1st place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu.
|
536 |
|
|
|a NSF (Grants IIS-1251217 and VEC 1539014/1539099)
|
546 |
|
|
|a en
|
655 |
7 |
|
|a Article
|
773 |
|
|
|t IEEE International Conference on Robotics and Automation (ICRA)
|