Paper Title

PERGAMO: Personalized 3D Garments from Monocular Video

Paper Authors

Andrés Casado-Elvira, Marc Comino Trinidad, Dan Casas

Paper Abstract

Clothing plays a fundamental role in digital humans. Current approaches to animate 3D garments are mostly based on realistic physics simulation; however, they typically suffer from two main issues: high computational run-time cost, which hinders their deployment, and a simulation-to-real gap, which impedes the synthesis of specific real-world cloth samples. To circumvent both issues, we propose PERGAMO, a data-driven approach to learn a deformable model for 3D garments from monocular images. To this end, we first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos. We use these 3D reconstructions to train a regression model that accurately predicts how the garment deforms as a function of the underlying body pose. We show that our method is capable of producing garment animations that match the real-world behaviour, and that it generalizes to unseen body motions extracted from motion capture datasets.
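The core idea of the abstract is a regression model that maps the underlying body pose to garment deformations, trained on 3D reconstructions recovered from monocular video. As a rough illustration of that idea only (this is not the paper's actual architecture; the network shape, pose dimensionality, vertex count, and all names below are hypothetical placeholders), here is a minimal PyTorch sketch of a pose-to-deformation regressor with an L2 fitting loss:

```python
# Hypothetical sketch of a pose-conditioned garment deformation regressor.
# NOT the PERGAMO architecture; it only illustrates regressing per-vertex
# garment offsets from body pose parameters, as the abstract describes.
import torch
import torch.nn as nn

class PoseToGarmentRegressor(nn.Module):
    def __init__(self, pose_dim: int = 72, num_vertices: int = 4000, hidden: int = 256):
        super().__init__()
        # Simple MLP: body pose parameters -> per-vertex 3D displacements
        self.mlp = nn.Sequential(
            nn.Linear(pose_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_vertices * 3),
        )
        self.num_vertices = num_vertices

    def forward(self, pose: torch.Tensor) -> torch.Tensor:
        # pose: (batch, pose_dim) -> displacements: (batch, num_vertices, 3)
        out = self.mlp(pose)
        return out.view(-1, self.num_vertices, 3)

# Training-step sketch: fit the regressor to 3D garment reconstructions
# (e.g., recovered from monocular video) with a per-vertex L2 loss.
model = PoseToGarmentRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
pose = torch.randn(8, 72)          # placeholder body poses
target = torch.randn(8, 4000, 3)   # placeholder reconstructed garment offsets
optimizer.zero_grad()
pred = model(pose)
loss = ((pred - target) ** 2).mean()
loss.backward()
optimizer.step()
```

The sketch uses a plain MLP for simplicity; the key point it illustrates is the supervision signal: because the targets come from reconstructions of real video rather than simulation, a model fit this way can avoid the simulation-to-real gap the abstract mentions.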
