Paper Title
Learning Algorithm Generalization Error Bounds via Auxiliary Distributions
Paper Authors
Paper Abstract
Generalization error bounds are essential for comprehending how well machine learning models work. In this work, we propose a novel method, the Auxiliary Distribution Method, which leads to new upper bounds on the expected generalization error that are appropriate for supervised learning scenarios. We show that, under certain conditions, our general upper bounds can be specialized to new bounds involving the $α$-Jensen-Shannon information or the $α$-Rényi information ($0 < α < 1$) between a random variable modeling the set of training samples and another random variable modeling the hypothesis. Our upper bounds based on $α$-Jensen-Shannon information are also finite. Additionally, we demonstrate how our auxiliary distribution method can be used to derive upper bounds on the excess risk of some learning algorithms in the supervised learning context, as well as on the generalization error under a distribution mismatch between training and test data, where the mismatch is modeled as the $α$-Jensen-Shannon or $α$-Rényi divergence between the training and test data distributions. We also outline the conditions under which our proposed upper bounds might be tighter than earlier upper bounds.
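For context, the two information measures named in the abstract are usually built from the following divergences. This is a minimal sketch of the standard definitions, assuming the common convention in which an "information" is the divergence between the joint law $P_{W,S}$ of the hypothesis $W$ and the training set $S$ and the product of marginals $P_W \otimes P_S$; the notation here is our assumption, and the mixture-weight convention for the skewed Jensen-Shannon divergence varies across papers.
\[
D_{\alpha}(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}\,\log \int p(x)^{\alpha}\, q(x)^{1-\alpha}\,\mathrm{d}x, \qquad 0 < \alpha < 1,
\]
and, with the skewed mixture $M_{\alpha} = \alpha P + (1-\alpha)Q$,
\[
\mathrm{JS}_{\alpha}(P \,\|\, Q) \;=\; \alpha\, D_{\mathrm{KL}}\!\left(P \,\|\, M_{\alpha}\right) \;+\; (1-\alpha)\, D_{\mathrm{KL}}\!\left(Q \,\|\, M_{\alpha}\right),
\]
so that, for example, the $\alpha$-Jensen-Shannon information between $W$ and $S$ would read $I_{\mathrm{JS}}^{\alpha}(W;S) = \mathrm{JS}_{\alpha}(P_{W,S} \,\|\, P_W \otimes P_S)$. Under this convention, $\mathrm{JS}_{\alpha}$ is always finite: each KL term is taken against a mixture that dominates its first argument (e.g., $D_{\mathrm{KL}}(P \,\|\, M_{\alpha}) \le \log(1/\alpha)$), which is consistent with the abstract's claim that the $\alpha$-Jensen-Shannon-based bounds are finite.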