Paper Title

Learning Representation for Bayesian Optimization with Collision-free Regularization

Paper Authors

Fengxue Zhang, Brian Nord, Yuxin Chen

Paper Abstract

Bayesian optimization has been challenged by datasets with large-scale, high-dimensional, and non-stationary characteristics, which are common in real-world scenarios. Recent works attempt to handle such input by applying neural networks ahead of the classical Gaussian process to learn a latent representation. We show that even with proper network design, such learned representation often leads to collision in the latent space: two points with significantly different observations collide in the learned latent space, leading to degraded optimization performance. To address this issue, we propose LOCo, an efficient deep Bayesian optimization framework which employs a novel regularizer to reduce the collision in the learned latent space and encourage the mapping from the latent space to the objective value to be Lipschitz continuous. LOCo takes in pairs of data points and penalizes those too close in the latent space compared to their target space distance. We provide a rigorous theoretical justification for LOCo by inspecting the regret of this dynamic-embedding-based Bayesian optimization algorithm, where the neural network is iteratively retrained with the regularizer. Our empirical results demonstrate the effectiveness of LOCo on several synthetic and real-world benchmark Bayesian optimization tasks.
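The abstract describes the core mechanism concretely enough to sketch: LOCo's regularizer compares pairwise latent-space distances against the corresponding objective-value gaps and penalizes pairs whose latent distance is too small relative to their objective gap, pushing the latent-to-objective mapping toward Lipschitz continuity. Since only the abstract is available here, the snippet below is a minimal, hypothetical PyTorch sketch of such a pairwise hinge penalty, not the paper's actual implementation; the function name `collision_regularizer` and the hyperparameter `alpha` (an assumed Lipschitz-like scale) are illustrative.

```python
import torch

def collision_regularizer(z: torch.Tensor, y: torch.Tensor,
                          alpha: float = 1.0) -> torch.Tensor:
    """Illustrative pairwise hinge penalty on latent collisions.

    z: (n, d) latent embeddings from the representation network.
    y: (n,)   observed objective values for the same points.
    alpha: assumed Lipschitz-like scale; larger alpha tolerates
           smaller latent distances for a given objective gap.
    """
    dz = torch.cdist(z, z)                        # (n, n) latent-space distances
    dy = (y.unsqueeze(0) - y.unsqueeze(1)).abs()  # (n, n) objective-value gaps
    # A "collision": latent distance smaller than the scaled objective gap.
    penalty = torch.relu(dy / alpha - dz)
    # Average over distinct pairs only (the diagonal is trivially zero).
    mask = ~torch.eye(z.shape[0], dtype=torch.bool, device=z.device)
    return penalty[mask].mean()

# Hypothetical usage: add the penalty to the surrogate's training loss
# when retraining the encoder, e.g.
#   loss = surrogate_nll + lam * collision_regularizer(encoder(x), y)
```

The hinge form fires only on violating pairs, so well-separated embeddings incur no penalty; this matches the abstract's description of penalizing points "too close in the latent space compared to their target space distance" while leaving the rest of the geometry to the surrogate's own objective.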
