Paper Title
A New Minimax Theorem for Randomized Algorithms
Paper Authors
Paper Abstract
The celebrated minimax principle of Yao (1977) says that for any Boolean-valued function $f$ with finite domain, there is a distribution $\mu$ over the domain of $f$ such that computing $f$ to error $\varepsilon$ against inputs from $\mu$ is just as hard as computing $f$ to error $\varepsilon$ on worst-case inputs. Notably, however, the distribution $\mu$ depends on the target error level $\varepsilon$: the hard distribution which is tight for bounded error might be trivial to solve to small bias, and the hard distribution which is tight for a small bias level might be far from tight for bounded error levels.

In this work, we introduce a new type of minimax theorem which can provide a hard distribution $\mu$ that works for all bias levels at once. We show that this works for randomized query complexity, randomized communication complexity, some randomized circuit models, quantum query and communication complexities, approximate polynomial degree, and approximate logrank. We also prove an improved version of Impagliazzo's hardcore lemma.

Our proofs rely on two innovations over the classical approach of using von Neumann's minimax theorem or linear programming duality. First, we use Sion's minimax theorem to prove a minimax theorem for ratios of bilinear functions representing the cost and score of algorithms. Second, we introduce a new way to analyze low-bias randomized algorithms by viewing them as "forecasting algorithms" evaluated by a proper scoring rule. The expected score of the forecasting version of a randomized algorithm appears to be a more fine-grained way of analyzing the bias of the algorithm. We show that such expected scores have many elegant mathematical properties: for example, they can be amplified linearly instead of quadratically. We anticipate forecasting algorithms will find use in future work in which a fine-grained analysis of small-bias algorithms is required.
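For orientation, the display below sketches the contrast in the randomized query-complexity setting. The notation ($R_\varepsilon(f)$ for worst-case randomized query complexity with error $\varepsilon$, and $D^{\mu}_{\varepsilon}(f)$ for distributional complexity under $\mu$) is standard but supplied here for illustration; the second line paraphrases the flavor of the new theorem rather than quoting its exact statement.

```latex
% Yao's principle: for each fixed error level \varepsilon, some hard
% distribution \mu_\varepsilon attains the worst case:
\[
  R_\varepsilon(f) \;=\; \max_{\mu}\, D^{\mu}_{\varepsilon}(f).
\]
% The new type of minimax theorem (paraphrased, up to constant factors)
% instead provides one distribution that is hard at every bias level at once:
\[
  \exists\,\mu \;\;\forall\,\varepsilon \in (0, \tfrac{1}{2}):\qquad
  D^{\mu}_{\varepsilon}(f) \;=\; \Omega\bigl(R_{\varepsilon}(f)\bigr).
\]
```

The constant-factor losses in this sketch may apply to both the cost and the bias level; see the paper for the precise statement.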