Robotic manipulation continues to be an active area of research due to its broad range of real-world applications. Among its benchmark tasks, the peg-in-hole problem remains particularly challenging, requiring high-precision control under environmental uncertainty. This thesis presents a framework based on Deep Reinforcement Learning (DRL) to train a robotic manipulator to autonomously solve the peg-in-hole task. The proposed approach uses curriculum learning to train a single policy capable of handling all phases of the task: approach, contact-based hole search, and insertion. The curriculum is further extended to incorporate observation noise and force penalization, encouraging the emergence of compliant behaviors during contact. Training is conducted in a custom-designed, physics-based simulation environment. Simulation results demonstrate that the learned policy can complete the peg-in-hole task, though it struggles to balance task success with compliant interaction. To evaluate the potential for real-world deployment, the trained policy is transferred to a physical robot. Tests reveal several sources of sim-to-real discrepancy, particularly in the modeling of contact dynamics. Nonetheless, partial success in real-world trials suggests the viability of sim-to-real transfer for DRL-trained policies. Overall, this work contributes to the understanding of DRL's capabilities and limitations in solving complex robotic manipulation tasks such as peg-in-hole assembly.
Master's final project
English
DRL (Deep Reinforcement Learning); Deep learning (Machine learning); Robots -- Control systems; Sim-to-real transfer; Peg-in-hole task
Universitat de Girona. Institut de Recerca en Visió per Computador i Robòtica
Attribution-NonCommercial-NoDerivatives 4.0 International
http://creativecommons.org/licenses/by-nc-nd/4.0/