Abstract
Reinforcement learning (RL) has emerged as a powerful approach for enabling robots to acquire complex skills through interaction with their environment. This study presents a novel application of RL to the challenging problem of collision avoidance for a Kinova Gen3 robotic arm tasked with ball balancing. The robot must maintain dynamic equilibrium of a ball on its end effector while navigating a constrained workspace and avoiding collisions with obstacles. Modern RL algorithms enable the robot to learn collision-free control strategies in a data-driven manner. Specifically, this paper combines an actor-critic architecture with advanced exploration strategies to learn optimal collision avoidance behaviors efficiently, as sketched below. The actor learns a policy that determines the robot's actions to keep the ball balanced, while the critic estimates the value function, providing feedback for policy improvement. To facilitate effective learning, a realistic simulation environment was designed in which the Kinova Gen3 robotic arm collects large amounts of interaction data, which are then used to train and refine the RL-based collision avoidance policy. Experimental simulation results validate the effectiveness and adaptability of the proposed RL-based collision avoidance technique: the Kinova Gen3 robotic arm successfully learns to perform the ball-balancing task while maintaining safety through intelligent collision avoidance strategies. By demonstrating how RL can help robots accomplish complex tasks such as dynamic equilibrium and real-time obstacle avoidance, this research advances robotic control methodologies.
Keywords: Ball balancing, Collision avoidance, Kinova Gen3 robot, Reinforcement learning, Robotics simulation, Soft actor-critic
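As a hedged illustration of the actor-critic update summarized above, the following minimal sketch shows how a soft actor-critic (SAC) agent, named in the keywords, might be implemented in PyTorch. All dimensions, network sizes, and hyperparameters below are illustrative assumptions, not details taken from the paper.

```python
# Minimal soft actor-critic (SAC) update sketch for a continuous-control
# task such as balancing a ball on an end effector. All values below are
# illustrative assumptions (not from the paper).
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 24, 7   # assumed observation size; Gen3 has 7 joints
GAMMA, ALPHA, TAU = 0.99, 0.2, 0.005  # discount, entropy weight, Polyak rate

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(),
                         nn.Linear(256, out_dim))

actor = mlp(OBS_DIM, 2 * ACT_DIM)          # mean and log-std of a Gaussian policy
critic = mlp(OBS_DIM + ACT_DIM, 1)         # Q(s, a) estimator
critic_target = mlp(OBS_DIM + ACT_DIM, 1)
critic_target.load_state_dict(critic.state_dict())

actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=3e-4)

def sample_action(obs):
    """Reparameterized sample from a tanh-squashed Gaussian policy."""
    mean, log_std = actor(obs).chunk(2, dim=-1)
    dist = torch.distributions.Normal(mean, log_std.clamp(-5, 2).exp())
    pre_tanh = dist.rsample()
    action = torch.tanh(pre_tanh)
    # Log-probability with the tanh change-of-variables correction.
    log_prob = dist.log_prob(pre_tanh).sum(-1)
    log_prob -= torch.log(1 - action.pow(2) + 1e-6).sum(-1)
    return action, log_prob

def update(batch):
    obs, act, rew, next_obs, done = batch  # float tensors from a replay buffer (assumed)

    # Critic: regress Q(s, a) toward the entropy-regularized soft Bellman target.
    with torch.no_grad():
        next_act, next_logp = sample_action(next_obs)
        next_q = critic_target(torch.cat([next_obs, next_act], -1)).squeeze(-1)
        target = rew + GAMMA * (1 - done) * (next_q - ALPHA * next_logp)
    q = critic(torch.cat([obs, act], -1)).squeeze(-1)
    critic_loss = nn.functional.mse_loss(q, target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: maximize expected Q plus policy entropy.
    new_act, logp = sample_action(obs)
    actor_loss = (ALPHA * logp - critic(torch.cat([obs, new_act], -1)).squeeze(-1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak-averaged target-network update.
    with torch.no_grad():
        for p, tp in zip(critic.parameters(), critic_target.parameters()):
            tp.mul_(1 - TAU).add_(TAU * p)
```

A full SAC implementation would additionally use twin Q-networks and an automatically tuned entropy temperature; the sketch keeps a single critic and a fixed temperature for brevity.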