Paper Title

Stateful Active Facilitator: Coordination and Environmental Heterogeneity in Cooperative Multi-Agent Reinforcement Learning

Authors

Dianbo Liu, Vedant Shah, Oussama Boussif, Cristian Meo, Anirudh Goyal, Tianmin Shu, Michael Mozer, Nicolas Heess, Yoshua Bengio

Abstract

In cooperative multi-agent reinforcement learning, a team of agents works together to achieve a common goal. Different environments or tasks may require varying degrees of coordination among agents in order to achieve the goal in an optimal way. The nature of the coordination will depend on the properties of the environment -- its spatial layout, distribution of obstacles, dynamics, etc. We refer to this variation of properties within an environment as heterogeneity. Existing literature has not sufficiently addressed the fact that different environments may have different levels of heterogeneity. We formalize the notions of the coordination level and heterogeneity level of an environment and present HECOGrid, a suite of multi-agent RL environments that facilitates empirical evaluation of different MARL approaches across different levels of coordination and environmental heterogeneity by providing quantitative control over the coordination and heterogeneity levels of the environment. Further, we propose a Centralized Training Decentralized Execution learning approach called Stateful Active Facilitator (SAF) that enables agents to work efficiently in high-coordination and high-heterogeneity environments through a differentiable, shared knowledge source used during training and dynamic selection from a shared pool of policies. We evaluate SAF and compare its performance against the IPPO and MAPPO baselines on HECOGrid. Our results show that SAF consistently outperforms the baselines across different tasks and different levels of heterogeneity and coordination. We release the code for HECOGrid as well as all our experiments.
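The abstract describes SAF as dynamically selecting from a shared pool of policies. As a rough illustration only, the sketch below shows one plausible way to implement differentiable selection from a policy pool in PyTorch: soft attention over learnable per-policy keys. The class `PolicyPool`, its layer sizes, and the attention-based selection are assumptions for illustration, not the authors' released SAF implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyPool(nn.Module):
    """Hypothetical sketch of dynamic policy selection from a shared pool.

    Each agent computes a query from its observation and attends over
    learnable keys, one per policy; the resulting soft, differentiable
    mixture of policy outputs stands in for "dynamic selection from a
    shared pool of policies". Names and sizes are illustrative
    assumptions, not the authors' released SAF implementation.
    """

    def __init__(self, obs_dim: int, act_dim: int, num_policies: int = 4, key_dim: int = 32):
        super().__init__()
        # The shared pool: K small policy networks mapping observations to action logits.
        self.policies = nn.ModuleList(
            [nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
             for _ in range(num_policies)]
        )
        self.keys = nn.Parameter(torch.randn(num_policies, key_dim))  # one key per policy
        self.query = nn.Linear(obs_dim, key_dim)  # observation -> query

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Scaled dot-product attention over the pool gives selection weights.
        q = self.query(obs)                                        # [B, key_dim]
        scores = q @ self.keys.t() / self.keys.shape[1] ** 0.5     # [B, K]
        weights = F.softmax(scores, dim=-1)                        # [B, K]
        # Evaluate every policy, then take the weighted (soft) selection.
        logits = torch.stack([p(obs) for p in self.policies], dim=1)  # [B, K, act_dim]
        return (weights.unsqueeze(-1) * logits).sum(dim=1)         # [B, act_dim]

# Usage: a batch of 8 agent observations of size 16, 5 discrete actions.
pool = PolicyPool(obs_dim=16, act_dim=5)
action_logits = pool(torch.randn(8, 16))  # shape [8, 5]
```

A soft mixture keeps the selection differentiable end to end; a hard (argmax or sampled) selection would instead require a straight-through or RL-style gradient estimator.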
