Paper Title

Non-asymptotic Convergence of Adam-type Reinforcement Learning Algorithms under Markovian Sampling

Paper Authors

Huaqing Xiong, Tengyu Xu, Yingbin Liang, Wei Zhang

Paper Abstract

Despite the wide applications of Adam in reinforcement learning (RL), the theoretical convergence of Adam-type RL algorithms has not been established. This paper provides the first such convergence analysis for two fundamental RL algorithms of policy gradient (PG) and temporal difference (TD) learning that incorporate AMSGrad updates (a standard alternative to Adam in theoretical analysis), referred to as PG-AMSGrad and TD-AMSGrad, respectively. Moreover, our analysis focuses on Markovian sampling for both algorithms. We show that under general nonlinear function approximation, PG-AMSGrad with a constant stepsize converges to a neighborhood of a stationary point at the rate of $\mathcal{O}(1/T)$ (where $T$ denotes the number of iterations), and with a diminishing stepsize converges exactly to a stationary point at the rate of $\mathcal{O}(\log^2 T/\sqrt{T})$. Furthermore, under linear function approximation, TD-AMSGrad with a constant stepsize converges to a neighborhood of the global optimum at the rate of $\mathcal{O}(1/T)$, and with a diminishing stepsize converges exactly to the global optimum at the rate of $\mathcal{O}(\log T/\sqrt{T})$. Our study develops new techniques for analyzing Adam-type RL algorithms under Markovian sampling.
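
For reference, a minimal sketch of the generic AMSGrad update that PG-AMSGrad and TD-AMSGrad instantiate with the policy gradient and the TD update direction, respectively; the notation ($\beta_1$, $\beta_2$, $\alpha_t$, $g_t$, $\theta_t$) follows the standard presentation of AMSGrad and is not quoted from the paper:

$m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t, \quad v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2, \quad \hat{v}_t = \max(\hat{v}_{t-1}, v_t), \quad \theta_{t+1} = \theta_t - \alpha_t \, \frac{m_t}{\sqrt{\hat{v}_t}},$

where $g_t$ is the stochastic update direction computed from Markovian samples at iteration $t$, the square, maximum, square root, and division are taken elementwise, and $\alpha_t$ is either the constant or the diminishing stepsize analyzed in the abstract above.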
