Sight to touch: 3D diffeomorphic deformation recovery with mixture components for perceiving forces in robotic-assisted surgery

dc.contributor
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial
dc.contributor
Universitat Politècnica de Catalunya. GRINS - Grup de Recerca en Robòtica Intel·ligent i Sistemes
dc.contributor.author
Avilés Rivero, Angélica
dc.contributor.author
Alsaleh, Samar M.
dc.contributor.author
Casals Gelpí, Alicia
dc.date.issued
2017
dc.identifier
Avilés, A., Alsaleh, S., Casals, A. Sight to touch: 3D diffeomorphic deformation recovery with mixture components for perceiving forces in robotic-assisted surgery. A: IEEE/RSJ International Conference on Intelligent Robots and Systems. "2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS): 24-28 Sept. 2017". Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 160-165.
dc.identifier
978-1-5386-2682-5
dc.identifier
https://hdl.handle.net/2117/114875
dc.identifier
10.1109/IROS.2017.8202152
dc.description.abstract
Robotic-assisted minimally invasive surgical systems suffer from one major limitation: the lack of interaction-force feedback. The restricted sense of touch hinders the surgeons' performance and reduces their dexterity and precision during a procedure. In this work, we present a sensory substitution approach that relies on visual stimuli to transmit the tool-tissue interaction forces to the operating surgeon. Our approach combines a 3D diffeomorphic deformation mapping with a generative model to precisely label the force level. The main highlights of our approach are that the use of diffeomorphic transformations ensures anatomical structure preservation and that the label assignment is based on a parametric form of several mixture elements. We performed experiments on both ex-vivo and in-vivo datasets and report detailed numerical results evaluating our approach. The results show that our solution achieves an error below 1 mm in all directions and an average labeling error of 2.05%. It is also applicable to other scenarios that require force feedback, such as microsurgery, knot tying, or needle-based procedures.
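The abstract describes assigning a force-level label through a parametric form of several mixture elements. The paper's actual model and parameters are not given here; the following is an illustrative sketch only, assuming a scalar deformation feature (in mm) classified by a three-component Gaussian mixture with hypothetical means, variances, and priors.

```python
import numpy as np

# Hypothetical mixture parameters (placeholders, not from the paper):
# each component corresponds to one force level.
means = np.array([0.5, 2.0, 4.0])    # mean tissue deformation (mm) per level
stds = np.array([0.3, 0.5, 0.8])     # component standard deviations
weights = np.array([0.5, 0.3, 0.2])  # mixture priors
labels = ["low", "medium", "high"]

def gaussian_pdf(x, mu, sigma):
    """Univariate normal density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def label_force(deformation_mm):
    """Return the force-level label with the highest posterior responsibility."""
    post = weights * gaussian_pdf(deformation_mm, means, stds)
    post /= post.sum()  # normalize responsibilities
    return labels[int(np.argmax(post))], post

label, responsibilities = label_force(1.9)
```

In practice the feature would be derived from the recovered 3D deformation field, and the mixture parameters would be fit to training data rather than fixed by hand as above.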
dc.description
Peer Reviewed
dc.description
Postprint (author's final draft)
dc.format
6 p.
dc.format
application/pdf
dc.language
eng
dc.publisher
Institute of Electrical and Electronics Engineers (IEEE)
dc.relation
http://ieeexplore.ieee.org/document/8202152/
dc.rights
Open Access
dc.subject
Àrees temàtiques de la UPC::Informàtica::Robòtica
dc.subject
Biomechanics
dc.subject
Three dimensional imaging in medicine
dc.subject
Robotics in medicine
dc.subject
Vision based force estimation
dc.subject
Robot assisted minimally invasive surgery
dc.subject
Topology preservation
dc.subject
Biomecànica
dc.subject
Imatges tridimensionals en medicina
dc.subject
Robòtica en medicina
dc.title
Sight to touch: 3D diffeomorphic deformation recovery with mixture components for perceiving forces in robotic-assisted surgery
dc.type
Conference report