Paper Title

On Neural Consolidation for Transfer in Reinforcement Learning

Paper Authors

Guillet, Valentin, Wilson, Dennis G., Aguilar-Melchor, Carlos, Rachelson, Emmanuel

Paper Abstract

Although transfer learning is considered to be a milestone in deep reinforcement learning, the mechanisms behind it are still poorly understood. In particular, predicting whether knowledge can be transferred between two given tasks is still an unresolved problem. In this work, we explore the use of network distillation as a feature extraction method to better understand the context in which transfer can occur. Notably, we show that distillation does not prevent knowledge transfer, including when transferring from multiple tasks to a new one, and we compare these results with transfer without prior distillation. We focus our work on the Atari benchmark due to the variability between different games, but also due to their similarities in terms of visual features.
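As background on the network distillation the abstract refers to: in policy distillation, a student network is typically trained to match the softened action distribution of a teacher network, often via a KL-divergence loss with a temperature parameter. The sketch below is a minimal, self-contained illustration of that loss; the function names and the temperature value are illustrative assumptions, not details taken from this paper.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature; a higher temperature yields softer targets."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence KL(teacher || student) between softened action
    distributions -- a common form of the policy-distillation objective.
    (Illustrative sketch, not the exact loss used in the paper.)"""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher exactly incurs zero loss.
teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))  # → 0.0
```

In practice this loss is minimized over states sampled from the teacher's trajectories, so the student compresses the teacher's behavior into (possibly smaller) features that can then be probed for transfer.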
