Journal article, Transactions on Machine Learning Research Journal, Year: 2023

Fourier Features in Reinforcement Learning with Neural Networks

Abstract

In classic Reinforcement Learning (RL), encoding the inputs with a Fourier feature mapping is a standard way to facilitate generalization and add prior domain knowledge. In Deep RL, such input encodings are less common since they could, in principle, be learned by the network and may therefore seem less beneficial. In this paper, we present experiments on Multilayer Perceptrons (MLPs) that indicate that even in Deep RL, Fourier features can lead to significant performance gains in both rewards and sample efficiency. Furthermore, we observe that they increase robustness with respect to hyperparameters, lead to smoother policies, and benefit the training process by reducing learning interference, encouraging sparsity, and increasing the expressiveness of the learned features. However, a major bottleneck with conventional Fourier features is that the number of features increases exponentially with the state dimension. As a remedy, we propose a simple, light version that has only a linear number of features yet empirically provides similar benefits. Our experiments cover shallow/deep, discrete/continuous, and on/off-policy RL settings.
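To illustrate the dimensionality argument in the abstract, the sketch below contrasts a standard Fourier feature mapping, whose feature count grows as (n+1)^d with the state dimension d, with a hypothetical "light" variant that keeps only per-dimension frequencies and therefore scales linearly in d. The function names, the normalization of the state to [0,1]^d, and the exact construction of the light variant are assumptions made for illustration; they are not taken from the paper.

```python
import itertools
import numpy as np


def full_fourier_features(s, order):
    """Standard Fourier basis: one cosine per frequency vector
    c in {0, ..., order}^d, i.e. (order + 1)^d features.
    Assumes the state s is already normalized to [0, 1]^d."""
    s = np.asarray(s, dtype=np.float64)
    d = s.shape[0]
    freqs = np.array(list(itertools.product(range(order + 1), repeat=d)))
    return np.cos(np.pi * freqs @ s)  # shape: ((order + 1) ** d,)


def light_fourier_features(s, order):
    """Hypothetical 'light' variant for illustration only: keep the constant
    term plus frequency vectors with a single nonzero component (one state
    dimension at a time), giving d * order + 1 features, linear in d."""
    s = np.asarray(s, dtype=np.float64)
    d = s.shape[0]
    feats = [1.0]  # constant feature (c = 0)
    for i in range(d):
        for k in range(1, order + 1):
            feats.append(np.cos(np.pi * k * s[i]))
    return np.array(feats)


if __name__ == "__main__":
    state = np.array([0.2, 0.7, 0.5, 0.9])  # toy 4-dimensional state in [0, 1]^4
    print(full_fourier_features(state, order=3).shape)   # (256,) = 4**4
    print(light_fourier_features(state, order=3).shape)  # (13,)  = 4 * 3 + 1
```

In both cases the resulting feature vector would simply replace the raw state as input to the MLP; the point of the toy example is only to show how quickly the full basis grows with the state dimension compared to a linear-sized variant.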


Main file

918_fourier_features_in_reinforcem.pdf (15.21 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04316346, version 1 (30-11-2023)

Identifiers

  • HAL Id: hal-04316346, version 1

Cite

David Brellmann, David Filliat, Goran Frehse. Fourier Features in Reinforcement Learning with Neural Networks. Transactions on Machine Learning Research Journal, 2023. ⟨hal-04316346⟩
