A 3D descriptor to detect task-oriented grasping points in clothing

dc.contributor
Institut de Robòtica i Informàtica Industrial
dc.contributor
Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
dc.contributor.author
Ramisa Ayats, Arnau
dc.contributor.author
Alenyà Ribas, Guillem
dc.contributor.author
Moreno-Noguer, Francesc
dc.contributor.author
Torras, Carme
dc.date.issued
2016
dc.identifier
Ramisa, A., Alenyà, G., Moreno-Noguer, F., Torras, C. A 3D descriptor to detect task-oriented grasping points in clothing. "Pattern Recognition", 2016, vol. 60, p. 936-948.
dc.identifier
0031-3203
dc.identifier
https://hdl.handle.net/2117/104407
dc.identifier
10.1016/j.patcog.2016.07.003
dc.description.abstract
© 2016. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.description.abstract
Manipulating textile objects with a robot is a challenging task, especially because garment perception is difficult due to the endless configurations a garment can adopt, coupled with a large variety of colors and designs. Most current approaches follow a multiple re-grasp strategy, in which clothes are sequentially grasped from different points until one of them yields a recognizable configuration. In this work we propose a method that combines 3D and appearance information to directly select a suitable grasping point for the task at hand, which in our case consists of hanging a shirt or a polo shirt from a hook. Our method follows a coarse-to-fine approach in which, first, the collar of the garment is detected and, next, a grasping point on the lapel is chosen using a novel 3D descriptor. In contrast to current 3D descriptors, ours can run in real time, even when it needs to be densely computed over the input image. Our central idea is to take advantage of the structured nature of the range images that most depth sensors provide and, by exploiting integral imaging, achieve speed-ups of two orders of magnitude with respect to competing approaches while maintaining performance. This makes it especially well suited to robotic applications, as we thoroughly demonstrate in the experimental section.
dc.description
Peer Reviewed
dc.description
Postprint (author's final draft)
dc.format
13 p.
dc.format
application/pdf
dc.language
eng
dc.relation
http://www.sciencedirect.com/science/article/pii/S0031320316301558
dc.relation
info:eu-repo/grantAgreement/EC/FP7/269959/EU/Intelligent observation and execution of Actions and manipulations/INTELLACT
dc.rights
http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.rights
Open Access
dc.rights
Attribution-NonCommercial-NoDerivs 3.0 Spain
dc.subject
Àrees temàtiques de la UPC::Informàtica::Robòtica (UPC subject areas::Computer science::Robotics)
dc.subject
computer vision
dc.subject
3D descriptor
dc.subject
Recognition
dc.subject
Detection
dc.subject
Grasping
dc.subject
Manipulation
dc.subject
Robotics
dc.subject
Classificació INSPEC::Pattern recognition::Computer vision
dc.title
A 3D descriptor to detect task-oriented grasping points in clothing
dc.type
Article

