Paper Title

CLIP model is an Efficient Continual Learner

Paper Authors

Vishal Thengane, Salman Khan, Munawar Hayat, Fahad Khan

Paper Abstract

The continual learning setting aims to learn new tasks over time without forgetting the previous ones. The literature reports several significant efforts to tackle this problem with limited or no access to previous task data. Among such efforts, typical solutions offer sophisticated techniques involving memory replay, knowledge distillation, model regularization, and dynamic network expansion. The resulting methods have a retraining cost at each learning task, dedicated memory requirements, and setting-specific design choices. In this work, we show that a frozen CLIP (Contrastive Language-Image Pretraining) model offers astounding continual learning performance without any fine-tuning (zero-shot evaluation). We evaluate CLIP under a variety of settings including class-incremental, domain-incremental and task-agnostic incremental learning on five popular benchmarks (ImageNet-100 & 1K, CORe50, CIFAR-100, and TinyImageNet). Without any bells and whistles, the CLIP model outperforms the state-of-the-art continual learning approaches in the majority of the settings. We show the effect on the CLIP model's performance by varying text inputs with simple prompt templates. To the best of our knowledge, this is the first work to report the CLIP zero-shot performance in a continual setting. We advocate the use of this strong yet embarrassingly simple baseline for future comparisons in the continual learning tasks.
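
The zero-shot evaluation described in the abstract amounts to classifying each test image against text prompts for the classes seen so far, using a frozen CLIP backbone and a simple prompt template. Below is a minimal sketch of that setup using the open-source `clip` package from OpenAI; the ViT-B/16 backbone, the class names, the template, and the image path are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: zero-shot classification with a frozen CLIP model and a
# simple prompt template. Backbone choice, class names, template, and image
# path are illustrative assumptions.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)  # frozen backbone, no fine-tuning

# Class names accumulated so far across incremental tasks (illustrative list).
class_names = ["dog", "cat", "airplane", "truck"]
template = "a photo of a {}."  # simple prompt template; varying it changes accuracy

# Encode one text prompt per class seen so far.
text_tokens = clip.tokenize([template.format(c) for c in class_names]).to(device)
with torch.no_grad():
    text_features = model.encode_text(text_tokens)
    text_features /= text_features.norm(dim=-1, keepdim=True)

# Classify a test image by cosine similarity to the class prompts.
image = preprocess(Image.open("test.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    image_features = model.encode_image(image)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    logits = (100.0 * image_features @ text_features.T).softmax(dim=-1)

pred = class_names[logits.argmax(dim=-1).item()]
print(f"predicted class: {pred}")
```

In a class-incremental setting, the only change per task is appending the newly introduced class names to `class_names` and re-encoding the prompts; the image and text encoders themselves are never updated.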
