Paper Title

Policy Gradient in Robust MDPs with Global Convergence Guarantee

Authors

Qiuhao Wang, Chin Pang Ho, Marek Petrik

Abstract

Robust Markov decision processes (RMDPs) provide a promising framework for computing reliable policies in the face of model errors. Many successful reinforcement learning algorithms build on variations of policy-gradient methods, but adapting these methods to RMDPs has been challenging. As a result, the applicability of RMDPs to large, practical domains remains limited. This paper proposes a new Double-Loop Robust Policy Gradient (DRPG), the first generic policy gradient method for RMDPs. In contrast with prior robust policy gradient algorithms, DRPG monotonically reduces approximation errors to guarantee convergence to a globally optimal policy in tabular RMDPs. We introduce a novel parametric transition kernel and solve the inner loop robust policy via a gradient-based method. Finally, our numerical results demonstrate the utility of our new algorithm and confirm its global convergence properties.
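The abstract describes a double-loop structure: an inner loop that finds an (approximately) worst-case transition kernel for the current policy by gradient descent over a parametric kernel, and an outer loop that performs policy gradient ascent against that kernel. The following is a minimal sketch of that structure on a tabular RMDP, not the authors' implementation: it assumes a softmax policy, an (s,a)-rectangular R-contamination uncertainty set P(s,a) = (1-beta)·P0(s,a) + beta·Q(s,a) with Q an arbitrary kernel, and an adversary parameterized by softmax logits; all variable names (theta, xi, beta, etc.) are illustrative choices, not taken from the paper.

```python
# Sketch of a double-loop robust policy gradient on a tabular RMDP.
# Uncertainty set (assumed here): R-contamination, P = (1-beta)*P0 + beta*Q_adv.
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, beta = 4, 2, 0.9, 0.2           # states, actions, discount, contamination level
P0 = rng.dirichlet(np.ones(S), size=(S, A))  # nominal kernel P0[s, a, s']
R = rng.uniform(size=(S, A))                 # rewards R[s, a]
rho = np.full(S, 1.0 / S)                    # initial state distribution


def softmax(x):
    z = np.exp(x - x.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)


def evaluate(pi, P):
    """Return (J, V, Q, d): objective, values, action values, discounted occupancy."""
    P_pi = np.einsum("sa,sat->st", pi, P)                 # state kernel under pi
    r_pi = np.einsum("sa,sa->s", pi, R)
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    Q = R + gamma * P @ V                                 # Q[s, a]
    d = np.linalg.solve(np.eye(S) - gamma * P_pi.T, rho)  # unnormalized occupancy
    return rho @ V, V, Q, d


theta = np.zeros((S, A))          # policy logits (outer variable)
for outer in range(200):
    pi = softmax(theta)

    # Inner loop: gradient descent on the adversary's kernel logits,
    # driving the kernel toward a (locally) worst case for the current policy.
    xi = np.zeros((S, A, S))
    for inner in range(50):
        Q_adv = softmax(xi)
        P = (1 - beta) * P0 + beta * Q_adv
        J, V, Qsa, d = evaluate(pi, P)
        # dJ/dP[s,a,s'] = gamma * d(s) * pi(a|s) * V(s'); chain through the
        # contamination mixture (factor beta) and the softmax over xi.
        gP = gamma * d[:, None, None] * pi[:, :, None] * V[None, None, :] * beta
        g_xi = Q_adv * (gP - (gP * Q_adv).sum(axis=-1, keepdims=True))
        xi -= 0.5 * g_xi          # adversary minimizes the return

    # Outer step: policy gradient ascent against the worst-case kernel found.
    P_worst = (1 - beta) * P0 + beta * softmax(xi)
    J, V, Qsa, d = evaluate(pi, P_worst)
    g_theta = d[:, None] * pi * (Qsa - V[:, None])   # tabular softmax policy gradient
    theta += 0.5 * g_theta

print("robust objective:", J)
```

The nested loops mirror the paper's high-level description only; the specific uncertainty set, step sizes, and stopping rules here are placeholders, and the paper's parametric transition kernel and convergence analysis are more general than this toy setup.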
