Title
Bayesian Learning via Q-Exponential Process
Authors
Abstract
Regularization is one of the most fundamental topics in optimization, statistics, and machine learning. To obtain sparsity in estimating a parameter $u\in\mathbb{R}^d$, an $\ell_q$ penalty term, $\Vert u\Vert_q$, is usually added to the objective function. What is the probability distribution corresponding to such an $\ell_q$ penalty? What is the correct stochastic process corresponding to $\Vert u\Vert_q$ when we model functions $u\in L^q$? This is important for statistically modeling large-dimensional objects, e.g., images, with a penalty that preserves certain properties, e.g., edges in the image. In this work, we generalize the $q$-exponential distribution (with density proportional to $\exp{(-\frac{1}{2}|u|^q)}$) to a stochastic process named the $Q$-exponential (Q-EP) process, which corresponds to the $L_q$ regularization of functions. The key step is to specify consistent multivariate $q$-exponential distributions by choosing from a large family of elliptic contour distributions. This work is closely related to the Besov process, which is usually defined through a series expansion. Q-EP can be regarded as a definition of the Besov process with an explicit probabilistic formulation and direct control over the correlation length. From the Bayesian perspective, Q-EP provides a flexible prior on functions with a sharper penalty ($q<2$) than the commonly used Gaussian process (GP). We compare GP, Besov, and Q-EP in modeling functional data, reconstructing images, and solving inverse problems, and demonstrate the advantage of our proposed methodology.
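The stated correspondence between the $\ell_q$ penalty and the $q$-exponential density can be checked numerically in one dimension. A density proportional to $\exp{(-\frac{1}{2}|u|^q)}$ is a generalized normal (exponential power) distribution, so its negative log-density should equal the penalty $\frac{1}{2}|u|^q$ up to an additive normalizing constant. The sketch below verifies this using SciPy's `gennorm`; the choice of SciPy and the specific scale reparameterization are illustrative assumptions, not part of the paper.

```python
import numpy as np
from scipy.stats import gennorm  # generalized normal: pdf ∝ exp(-|x/scale|^beta)

q = 1.5  # a sparsity-promoting exponent, q < 2

# Match density ∝ exp(-0.5*|u|^q): choose scale s so that |u/s|^q = 0.5*|u|^q,
# i.e. s = 2^(1/q).
scale = 2.0 ** (1.0 / q)

u = np.linspace(-3.0, 3.0, 13)
neg_log_density = -gennorm.logpdf(u, q, scale=scale)
penalty = 0.5 * np.abs(u) ** q

# The difference should be a constant (the log normalizing constant),
# confirming that MAP estimation under this prior is L_q-penalized estimation.
diff = neg_log_density - penalty
print(np.allclose(diff, diff[0]))
```

In Bayesian terms, this constant offset is exactly why the posterior mode under a $q$-exponential prior coincides with the $\ell_q$-penalized estimate.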