Paper Title

Q-learning with Uniformly Bounded Variance: Large Discounting is Not a Barrier to Fast Learning

Paper Authors

Devraj, Adithya M., Meyn, Sean P.

Paper Abstract

Sample complexity bounds are a common performance metric in the Reinforcement Learning literature. In the discounted-cost, infinite-horizon setting, all of the known bounds have a factor that is a polynomial in $1/(1-\gamma)$, where $\gamma < 1$ is the discount factor. For a large discount factor, these bounds seem to imply that a very large number of samples is required to achieve an $\varepsilon$-optimal policy. The objective of the present work is to introduce a new class of algorithms that have sample complexity uniformly bounded for all $\gamma < 1$. One may argue that this is impossible, due to a recent min-max lower bound. The explanation is that this previous lower bound is for a specific problem, which we modify, without compromising the ultimate objective of obtaining an $\varepsilon$-optimal policy. Specifically, we show that the asymptotic covariance of the Q-learning algorithm with an optimized step-size sequence is a quadratic function of $1/(1-\gamma)$; an expected, and essentially known, result. The new relative Q-learning algorithm proposed here is shown to have asymptotic covariance that is a quadratic function of $1/(1-\rho^* \gamma)$, where $1-\rho^* > 0$ is an upper bound on the spectral gap of an optimal transition matrix.
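The abstract contrasts standard Q-learning, whose asymptotic covariance grows like a quadratic in $1/(1-\gamma)$, with a relative Q-learning algorithm whose covariance is governed by $1/(1-\rho^* \gamma)$ instead. The sketch below is a minimal tabular illustration of this contrast, assuming a hypothetical cost-minimizing MDP specified by arrays `P` (transition probabilities) and `c` (one-step costs); the recentered target in the `relative=True` branch is a simplified stand-in inspired by relative value iteration, not the paper's exact relative Q-learning update.

```python
# Minimal sketch (hypothetical environment and step-size choice) contrasting
# standard tabular Q-learning with a "relative" variant that recenters the
# target by subtracting a fixed reference entry of the Q-table.
import numpy as np

def q_learning(P, c, gamma, n_steps=50_000, gain=1.0, relative=False, seed=0):
    """Tabular Q-learning for a discounted-cost MDP.

    P : array of shape (S, A, S), transition probabilities P[s, a, s'].
    c : array of shape (S, A), one-step costs.
    gamma : discount factor in (0, 1).
    relative : if True, subtract the reference value Q[0, 0] from the target
               (illustrative recentering; the paper's algorithm differs in detail).
    """
    rng = np.random.default_rng(seed)
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    visits = np.zeros((S, A))           # per-(state, action) visit counts
    s = 0
    for _ in range(n_steps):
        a = rng.integers(A)             # uniform exploration (off-policy)
        s_next = rng.choice(S, p=P[s, a])
        visits[s, a] += 1
        alpha = gain / visits[s, a]     # step size alpha_n = g / n(s, a)
        target = c[s, a] + gamma * Q[s_next].min()
        if relative:
            target -= Q[0, 0]           # recenter by a fixed reference entry
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next
    return Q
```

In this simplified sketch the recentering only shifts the fixed point by a constant, so the greedy (argmin) policy extracted from the returned Q-table is unchanged; the paper's point is that a recentering of this kind can keep the variance of the iterates bounded even as $\gamma \to 1$.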
