Nonstationary Stochastic Bandits: UCB Policies and Minimax Regret

We study the nonstationary stochastic Multi-Armed Bandit (MAB) problem in which the distributions of rewards associated with arms are assumed to be time-varying and the total variation in the expected rewards is subject to a variation budget. The regret of a policy is defined by the difference in th...
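To make the abstract's setup concrete, here is a minimal Python sketch, not the paper's proposed policy: it simulates a two-armed Bernoulli bandit whose mean rewards drift subject to a total variation budget V_T, and runs a generic sliding-window UCB heuristic against it, measuring dynamic regret against the per-round best arm. The window choice W on the order of (T/V_T)^(2/3) and all constants are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch only: a nonstationary bandit environment with a
# variation budget, plus a sliding-window UCB heuristic. Parameter
# choices (T, V_T, W) are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)

T = 10_000   # horizon
K = 2        # number of arms
V_T = 2.0    # variation budget: sum_t max_k |mu[t+1,k] - mu[t,k]| <= V_T

# Drifting mean rewards: spread the allowed variation evenly over the horizon.
mu = np.empty((T, K))
mu[0] = [0.3, 0.7]
step = V_T / (T - 1)                 # per-round variation allowance
for t in range(1, T):
    drift = rng.uniform(-step, step, K)
    drift *= step / max(np.abs(drift).max(), 1e-12)   # sup-norm change == step
    mu[t] = np.clip(mu[t - 1] + drift, 0.0, 1.0)      # clipping only shrinks variation

# Sliding-window UCB: statistics are computed over the last W rounds only,
# so stale observations from drifted means are forgotten.
W = int((T / V_T) ** (2 / 3))        # a common order for sliding-window policies
pulls = []                           # (arm, reward) history
regret = 0.0
for t in range(T):
    recent = pulls[-W:]
    n = np.array([sum(1 for a, _ in recent if a == k) for k in range(K)])
    if (n == 0).any():
        arm = int(np.argmin(n))      # play any arm unseen within the window
    else:
        means = np.array(
            [sum(r for a, r in recent if a == k) / n[k] for k in range(K)]
        )
        ucb = means + np.sqrt(2 * np.log(min(t + 1, W)) / n)
        arm = int(np.argmax(ucb))
    reward = float(rng.random() < mu[t, arm])   # Bernoulli reward draw
    pulls.append((arm, reward))
    regret += mu[t].max() - mu[t, arm]          # dynamic (oracle) regret

print(f"window W={W}, dynamic regret over T={T} rounds: {regret:.1f}")
```

The forgetting induced by the window is what keeps the policy tracking the drifting optimum; with the full-history UCB (W = T) the same loop would accumulate linear regret once the arms' ranking flips.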


Bibliographic Details
Published in: IEEE Open Journal of Control Systems
Main Authors: Lai Wei, Vaibhav Srivastava
Format: Article
Language: English
Published: IEEE, 2024-01-01
Online Access: https://ieeexplore.ieee.org/document/10460198/