Paper Title

Adaptive Second Order Coresets for Data-efficient Machine Learning

Authors

Omead Pooladzandi, David Davini, Baharan Mirzasoleiman

Abstract

Training machine learning models on massive datasets incurs substantial computational costs. To alleviate such costs, there has been a sustained effort to develop data-efficient training methods that can carefully select subsets of the training examples that generalize on par with the full training data. However, existing methods are limited in providing theoretical guarantees for the quality of the models trained on the extracted subsets, and may perform poorly in practice. We propose AdaCore, a method that leverages the geometry of the data to extract subsets of the training examples for efficient machine learning. The key idea behind our method is to dynamically approximate the curvature of the loss function via an exponentially-averaged estimate of the Hessian to select weighted subsets (coresets) that provide a close approximation of the full gradient preconditioned with the Hessian. We prove rigorous guarantees for the convergence of various first and second-order methods applied to the subsets chosen by AdaCore. Our extensive experiments show that AdaCore extracts coresets with higher quality compared to baselines and speeds up training of convex and non-convex machine learning models, such as logistic regression and neural networks, by over 2.9x over the full data and 4.5x over random subsets.
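To make the selection idea concrete, below is a minimal, illustrative sketch of choosing a weighted subset whose Hessian-preconditioned gradients approximate the full preconditioned gradient. It assumes a diagonal Hessian estimate maintained by exponential averaging and a greedy, facility-location-style selection that weights each chosen example by the size of the group it best represents; the function names (`exponential_average`, `select_coreset`) and the NumPy implementation are hypothetical and are not the authors' code.

```python
import numpy as np

def exponential_average(prev_hess_diag, new_hess_diag, beta=0.9):
    """Exponentially-averaged diagonal Hessian estimate (illustrative)."""
    return beta * prev_hess_diag + (1.0 - beta) * new_hess_diag

def select_coreset(grads, hess_diag, k, eps=1e-8):
    """
    Greedily pick k examples whose Hessian-preconditioned gradients
    cover the full set, weighting each by the number of examples it
    best represents (a sketch of the idea, not AdaCore itself).

    grads:     (n, d) per-example gradients
    hess_diag: (d,)   exponentially-averaged diagonal Hessian estimate
    """
    precond = grads / (hess_diag + eps)  # preconditioned per-example gradients
    # pairwise distances between preconditioned gradients
    dists = np.linalg.norm(precond[:, None, :] - precond[None, :, :], axis=-1)
    n = grads.shape[0]
    selected, min_dist = [], np.full(n, np.inf)
    for _ in range(k):
        # candidate that most reduces total distance to the selected set
        gains = np.minimum(min_dist[None, :], dists).sum(axis=1)
        best = int(np.argmin(gains))
        selected.append(best)
        min_dist = np.minimum(min_dist, dists[best])
    # weight = size of the cluster each selected example represents
    assign = np.argmin(dists[selected], axis=0)
    weights = np.bincount(assign, minlength=len(selected)).astype(float)
    return np.array(selected), weights

# Example: 1,000 examples with 10-dimensional gradients, 32-point coreset
rng = np.random.default_rng(0)
g = rng.normal(size=(1000, 10))
h = exponential_average(np.ones(10), rng.uniform(0.5, 2.0, size=10))
idx, w = select_coreset(g, h, k=32)
```

A training loop would then take gradient (or preconditioned gradient) steps on the selected examples, scaling each example's contribution by its weight, and periodically reselect the coreset as the curvature estimate is updated.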
