Paper Title

MonoNeRF: Learning a Generalizable Dynamic Radiance Field from Monocular Videos

Paper Authors

Fengrui Tian, Shaoyi Du, Yueqi Duan

Paper Abstract

In this paper, we target the problem of learning a generalizable dynamic radiance field from monocular videos. Unlike most existing NeRF methods, which are based on multiple views, monocular videos contain only one view at each timestamp and therefore suffer from ambiguity along the view direction when estimating point features and scene flows. Previous studies such as DynNeRF disambiguate point features by positional encoding, which is not transferable and severely limits generalization. As a result, these methods have to train one independent model for each scene and incur heavy computational costs when applied to the growing number of monocular videos in real-world applications. To address this, we propose MonoNeRF, which simultaneously learns point features and scene flows under point-trajectory and feature-correspondence constraints across frames. More specifically, we learn an implicit velocity field that estimates point trajectories from temporal features with a Neural ODE, followed by a flow-based feature aggregation module that obtains spatial features along the point trajectories. We jointly optimize the temporal and spatial features in an end-to-end manner. Experiments show that MonoNeRF is able to learn from multiple scenes and supports new applications such as scene editing, unseen frame synthesis, and fast novel scene adaptation. Code is available at https://github.com/tianfr/MonoNeRF.
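To make the abstract's core technical idea concrete (integrating an implicit velocity field with a Neural ODE to obtain point trajectories), here is a minimal sketch assuming PyTorch and the torchdiffeq package. Names such as VelocityField and point_trajectories are illustrative placeholders and are not taken from the official MonoNeRF code at the repository above.

```python
# Hypothetical sketch of Neural-ODE point-trajectory estimation as described
# in the abstract. Assumes PyTorch and `torchdiffeq`; all names here are
# illustrative, not from the official MonoNeRF implementation.
import torch
import torch.nn as nn
from torchdiffeq import odeint


class VelocityField(nn.Module):
    """Implicit velocity field v(x, t): maps a 3D point and a time to a 3D velocity."""

    def __init__(self, hidden_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 3),
        )

    def forward(self, t: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # odeint calls this with a scalar time t and the current state x of shape (N, 3).
        t_col = t.expand(x.shape[0], 1)
        return self.mlp(torch.cat([x, t_col], dim=-1))


def point_trajectories(velocity: VelocityField,
                       x0: torch.Tensor,
                       timestamps: torch.Tensor) -> torch.Tensor:
    """Integrate the velocity field from initial points x0 (N, 3) over
    timestamps (T,), returning a trajectory tensor of shape (T, N, 3)."""
    return odeint(velocity, x0, timestamps)


if __name__ == "__main__":
    # Toy usage: trace 1024 random points across 5 frame times.
    field = VelocityField()
    x0 = torch.rand(1024, 3)
    ts = torch.linspace(0.0, 1.0, 5)
    traj = point_trajectories(field, x0, ts)
    print(traj.shape)  # torch.Size([5, 1024, 3])
```

In the full method, the resulting trajectories would be used to gather per-frame image features at the integrated point positions (the flow-based feature aggregation step), so that temporal and spatial features can be optimized jointly end to end.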
