TY - UNPB
T1 - Convergence Proof for Actor-Critic Methods Applied to PPO and RUDDER
AU - Holzleitner, Markus
AU - Gruber, Lukas
AU - Arjona-Medina, Jose A.
AU - Brandstetter, Johannes
AU - Hochreiter, Sepp
PY - 2020
N2 - We prove, under commonly used assumptions, the convergence of actor-critic reinforcement learning algorithms, which simultaneously learn a policy function, the actor, and a value function, the critic. Both functions can be deep neural networks of arbitrary complexity. Our framework allows showing convergence of the well-known Proximal Policy Optimization (PPO) and of the recently introduced RUDDER. For the convergence proof, we employ recently introduced techniques from two time-scale stochastic approximation theory. Our results are valid for actor-critic methods that use episodic samples and whose policy becomes more greedy during learning. Previous convergence proofs assume linear function approximation, cannot treat episodic samples, or do not consider that policies become greedy. The latter is relevant since optimal policies are typically deterministic.
UR - https://arxiv.org/abs/2012.01399
DO - 10.48550/arXiv.2012.01399
M3 - Preprint
T3 - arXiv.org
ER -