Paper Title

Gradient Origin Networks

Paper Authors

Sam Bond-Taylor, Chris G. Willcocks

Paper Abstract

This paper proposes a new type of generative model that is able to quickly learn a latent representation without an encoder. This is achieved using empirical Bayes to calculate the expectation of the posterior, which is implemented by initialising a latent vector with zeros, then using the gradient of the log-likelihood of the data with respect to this zero vector as new latent points. The approach has similar characteristics to autoencoders, but with a simpler architecture, and is demonstrated in a variational autoencoder equivalent that permits sampling. This also allows implicit representation networks to learn a space of implicit functions without requiring a hypernetwork, retaining their representation advantages across datasets. The experiments show that the proposed method converges faster, with significantly lower reconstruction error than autoencoders, while requiring half the parameters.
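
The core mechanism can be sketched in a few lines of PyTorch. The following is a minimal, hypothetical illustration of the idea rather than the authors' implementation: the decoder architecture, the latent size (32), and binary cross-entropy as the stand-in negative log-likelihood are all assumptions chosen for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical decoder: 32-dim latent to flattened 28x28 outputs in (0, 1).
decoder = nn.Sequential(
    nn.Linear(32, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Sigmoid(),
)

def gon_loss(x, decoder, latent_dim=32):
    # Initialise the latent vector at the origin (all zeros).
    z0 = torch.zeros(x.size(0), latent_dim, requires_grad=True)
    # Negative log-likelihood of the data given the decoded origin
    # (binary cross-entropy plays that role for pixels in [0, 1]).
    nll = F.binary_cross_entropy(decoder(z0), x)
    # The negative gradient of this loss w.r.t. the zero vector becomes
    # the latent point; create_graph=True lets the outer loss
    # backpropagate through this inner gradient step.
    z = -torch.autograd.grad(nll, z0, create_graph=True)[0]
    # Outer objective: reconstruct the data from the derived latent.
    return F.binary_cross_entropy(decoder(z), x)

# Usage: one optimisation step on a stand-in batch of flattened images.
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
x = torch.rand(64, 784)
loss = gon_loss(x, decoder)
opt.zero_grad()
loss.backward()
opt.step()
```

Note that a single decoder serves both the encoding step (via its gradient at the origin) and the decoding step, which is consistent with the abstract's claim of requiring roughly half the parameters of a comparable autoencoder.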
