Paper title
Unsupervised Few-shot Learning via Deep Laplacian Eigenmaps
Paper authors
Paper abstract
Learning a new task from a handful of examples remains an open challenge in machine learning. Despite recent progress in few-shot learning, most methods rely on supervised pretraining or meta-learning on labeled meta-training data and cannot be applied when the pretraining data is unlabeled. In this study, we present an unsupervised few-shot learning method via deep Laplacian eigenmaps. Our method learns representations from unlabeled data by grouping similar samples together and can be intuitively interpreted via random walks on augmented training data. We analytically show how deep Laplacian eigenmaps avoid collapsed representations in unsupervised learning without explicit comparison between positive and negative samples. The proposed method significantly closes the performance gap between supervised and unsupervised few-shot learning. Our method also achieves performance comparable to current state-of-the-art self-supervised learning methods under the linear evaluation protocol.
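For background, the classical (non-deep) Laplacian eigenmaps algorithm that the paper builds on embeds samples so that similar ones stay close, by taking the smallest nontrivial eigenvectors of a graph Laplacian. Below is a minimal NumPy sketch of that classical technique; the Gaussian-affinity construction, `sigma` bandwidth, and toy two-cluster data are illustrative assumptions, not the paper's deep method.

```python
import numpy as np

def laplacian_eigenmaps(X, n_components=2, sigma=1.5):
    """Classical Laplacian eigenmaps: embed points so that
    similar (nearby) samples stay close in the embedding."""
    # Pairwise squared Euclidean distances
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    # Gaussian (heat-kernel) affinity matrix W with zero diagonal
    W = np.exp(-d2 / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Eigenvectors for the smallest eigenvalues; skip the trivial first one
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:n_components + 1]

# Two well-separated clusters: the first embedding coordinate
# (the Fiedler-like vector) takes opposite signs on the two clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (10, 3)),
               rng.normal(3.0, 0.1, (10, 3))])
Y = laplacian_eigenmaps(X, n_components=1)
print(Y.shape)  # (20, 1)
```

The abstract's point about avoiding collapse can be seen here: the embedding is constrained to be orthogonal to the trivial constant eigenvector, so it cannot degenerate to a single point even though no negative pairs are explicitly contrasted.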