Publication date

2025-06



Abstract

Robotic manipulation continues to be an active area of research due to its broad range of real-world applications. Among its benchmark tasks, the peg-in-hole problem remains particularly challenging, requiring high-precision control under environmental uncertainty. This thesis presents a framework based on Deep Reinforcement Learning (DRL) to train a robotic manipulator to autonomously solve the peg-in-hole task. The proposed approach uses curriculum learning to train a single policy capable of handling all phases of the task: approach, contact-based hole search, and insertion. The curriculum is further extended to incorporate observation noise and force penalization, encouraging the emergence of compliant behaviors during contact. Training is conducted in a custom-designed, physics-based simulation environment. Simulation results demonstrate that the learned policy can complete the peg-in-hole task, though it faces difficulties in balancing task success with compliant interaction. To evaluate the potential for real-world deployment, the trained policy is transferred to a physical robot. Tests reveal several sources of sim-to-real discrepancy, particularly in the modeling of contact dynamics. Nonetheless, partial success in real-world trials suggests the viability of sim-to-real transfer for DRL-trained policies. Overall, this work contributes to the understanding of DRL's capabilities and limitations in solving complex robotic manipulation tasks such as peg-in-hole assembly.

Document Type

Master's final project

Language

English

Publisher

Universitat de Girona. Institut de Recerca en Visió per Computador i Robòtica

Rights

Attribution-NonCommercial-NoDerivatives 4.0 International

http://creativecommons.org/licenses/by-nc-nd/4.0/
