Conference Paper, Year: 2017

Active Exploration and Parameterized Reinforcement Learning Applied to a Simulated Human-Robot Interaction Task

Mehdi Khamassi
  • Role: Author
George Velentzas
  • Role: Author
Theodore Tsitsimis
  • Role: Author
Costas Tzafestas
  • Role: Author

Abstract

Online model-free reinforcement learning (RL) methods with continuous actions play a prominent role in real-world applications such as robotics. However, when confronted with non-stationary environments, these methods crucially rely on an exploration-exploitation trade-off that is rarely adjusted dynamically and automatically in response to changes in the environment. Here we propose an active exploration algorithm for RL in a structured (parameterized) continuous action space. This framework deals with a set of discrete actions, each of which is parameterized with continuous variables. Discrete exploration is controlled through a Boltzmann softmax function with an inverse temperature β parameter. In parallel, Gaussian exploration is applied to the continuous action parameters. We apply a meta-learning algorithm, based on the comparison between variations of short-term and long-term reward running averages, to simultaneously tune β and the width of the Gaussian distribution from which continuous action parameters are drawn. We first show that this algorithm reaches state-of-the-art performance in the non-stationary multi-armed bandit paradigm, while also generalizing to continuous actions and multi-step tasks. We then apply it to a simulated human-robot interaction task and show that it outperforms continuous parameterized RL both without active exploration and with active exploration based on uncertainty variations measured by a Kalman Q-learning algorithm.
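To make the exploration scheme described in the abstract concrete, here is a minimal Python sketch of one possible agent combining Boltzmann softmax selection over discrete actions, Gaussian exploration of their continuous parameters, and meta-learned tuning of β and σ from the gap between short-term and long-term reward running averages. All names, update rules, gains and clipping bounds below are illustrative assumptions for a bandit-like setting, not the authors' reference implementation or the values used in the paper.

```python
import numpy as np

class ActiveExplorationAgent:
    """Sketch of parameterized action selection with meta-learned exploration."""

    def __init__(self, n_discrete, param_dim, alpha=0.1,
                 tau_short=0.05, tau_long=0.005, meta_gain=1.0):
        self.q = np.zeros(n_discrete)                    # values of discrete actions
        self.theta = np.zeros((n_discrete, param_dim))   # mean continuous parameters per action
        self.alpha = alpha                               # learning rate (assumed value)
        self.beta = 1.0                                  # inverse temperature (discrete exploration)
        self.sigma = 1.0                                 # Gaussian width (continuous exploration)
        self.r_short = 0.0                               # short-term reward running average
        self.r_long = 0.0                                # long-term reward running average
        self.tau_short, self.tau_long = tau_short, tau_long
        self.meta_gain = meta_gain                       # assumed meta-learning gain

    def act(self, rng):
        # Boltzmann softmax over discrete actions, controlled by beta
        prefs = self.beta * (self.q - self.q.max())
        probs = np.exp(prefs) / np.exp(prefs).sum()
        a = rng.choice(len(self.q), p=probs)
        # Gaussian exploration around the current continuous parameters of action a
        params = rng.normal(self.theta[a], self.sigma)
        return a, params

    def update(self, a, params, reward):
        # Value update for the chosen discrete action (stateless bandit case)
        delta = reward - self.q[a]
        self.q[a] += self.alpha * delta
        # Pull continuous parameters toward samples that yielded positive surprise
        if delta > 0:
            self.theta[a] += self.alpha * (params - self.theta[a])
        # Meta-learning signal: short-term minus long-term reward running average
        self.r_short += self.tau_short * (reward - self.r_short)
        self.r_long += self.tau_long * (reward - self.r_long)
        trend = self.r_short - self.r_long   # > 0: reward rising, < 0: reward dropping
        # Exploit more (higher beta, narrower Gaussian) when reward is rising;
        # re-open exploration when a drop suggests the environment changed
        self.beta = float(np.clip(self.beta + self.meta_gain * trend, 0.1, 50.0))
        self.sigma = float(np.clip(self.sigma - self.meta_gain * trend, 0.05, 2.0))
```

With this sketch, a sustained reward drop in a non-stationary bandit lowers β and widens σ, so both the discrete choice and the continuous parameters are explored again; once the short-term average overtakes the long-term one, exploitation narrows back down.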
Main file
Khamassi2017_RobotComputing_PID4650993.pdf (2.45 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03774983, version 1 (12-09-2022)

Identifiers

Cite

Mehdi Khamassi, George Velentzas, Theodore Tsitsimis, Costas Tzafestas. Active Exploration and Parameterized Reinforcement Learning Applied to a Simulated Human-Robot Interaction Task. First IEEE International Conference on Robotic Computing (IRC 2017), Apr 2017, Taichung, Taiwan. pp. 28-35, ⟨10.1109/IRC.2017.33⟩. ⟨hal-03774983⟩
