Reinforcement Learning for Non-Stationary Markov Decision Processes: The Blessing of (More) Optimism

We consider un-discounted reinforcement learning (RL) in Markov decision processes (MDPs) under drifting non-stationarity, i.e., both the reward and state transition distributions are allowed to evolve over time, as long as their respective total variations, quantified by suitable metrics, do not exceed certain variation budgets. We first develop the Sliding Window Upper-Confidence bound for Reinforcement Learning with Confidence Widening (SWUCRL2-CW) algorithm, and establish its dynamic regret bound when the variation budgets are known. In addition, we propose the Bandit-over-Reinforcement Learning (BORL) algorithm to adaptively tune the SWUCRL2-CW algorithm to achieve the same dynamic regret bound, but in a parameter-free manner, i.e., without knowing the variation budgets. Notably, learning non-stationary MDPs via the conventional optimistic exploration technique presents a unique challenge absent in existing (non-stationary) bandit learning settings. We overcome the challenge by a novel confidence widening technique that incorporates additional optimism.
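
For intuition only, the minimal sketch below illustrates the kind of idea the abstract describes: a UCRL2-style L1 confidence radius for transition estimates built from sliding-window visit counts, enlarged by an extra widening term (the "additional optimism"). It is not the paper's exact construction; the function name, the constants, and the widening parameter eta are illustrative assumptions.

```python
import numpy as np

def widened_confidence_radius(n_visits, n_states, window_len, delta=0.05, eta=0.1):
    """Hypothetical confidence radius for sliding-window transition estimates.

    A Hoeffding-style L1 radius of the kind used by UCRL2-type algorithms,
    enlarged by an extra widening term `eta` (illustrative, not the paper's
    exact formula or constants).
    """
    n = max(1, n_visits)  # visits to the (state, action) pair inside the window
    base = np.sqrt(2 * n_states * np.log(window_len / delta) / n)
    return base + eta     # widening keeps the confidence set large enough under drift

# Usage: empirical next-state distribution estimated from the last `window_len`
# steps, together with the plausible set {p : ||p - p_hat||_1 <= radius}.
p_hat = np.array([0.5, 0.3, 0.2])
radius = widened_confidence_radius(n_visits=40, n_states=3, window_len=1000)
print(f"widened L1 confidence radius: {radius:.3f}")
```

Under drift, the empirical estimate from the window can be biased, so a sketch like this keeps the confidence set wide enough that an optimistic planner is not misled by stale samples; the paper's SWUCRL2-CW algorithm formalizes this trade-off.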


Bibliographic Details
Main Authors: Cheung, Wang Chi (Author), Simchi-Levi, David (Author), Zhu, Ruihao (Author)
Format: Article
Language: English
Published: 2021-11-03T17:29:34Z.
Subjects:
Online Access: Get fulltext (https://hdl.handle.net/1721.1/137255)
LEADER 01610 am a22001693u 4500
001 137255
042 |a dc 
100 1 0 |a Cheung, Wang Chi  |e author 
700 1 0 |a Simchi-Levi, David  |e author 
700 1 0 |a Zhu, Ruihao  |e author 
245 0 0 |a Reinforcement Learning for Non-Stationary Markov Decision Processes: The Blessing of (More) Optimism 
260 |c 2021-11-03T17:29:34Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/137255 
520 |a We consider un-discounted reinforcement learning (RL) in Markov decision processes (MDPs) under drifting non-stationarity, i.e., both the reward and state transition distributions are allowed to evolve over time, as long as their respective total variations, quantified by suitable metrics, do not exceed certain variation budgets. We first develop the Sliding Window Upper-Confidence bound for Reinforcement Learning with Confidence Widening (SWUCRL2-CW) algorithm, and establish its dynamic regret bound when the variation budgets are known. In addition, we propose the Bandit-over-Reinforcement Learning (BORL) algorithm to adaptively tune the SWUCRL2-CW algorithm to achieve the same dynamic regret bound, but in a parameter-free manner, i.e., without knowing the variation budgets. Notably, learning non-stationary MDPs via the conventional optimistic exploration technique presents a unique challenge absent in existing (non-stationary) bandit learning settings. We overcome the challenge by a novel confidence widening technique that incorporates additional optimism. 
546 |a en 
655 7 |a Article 
773 |t INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119