Paper Title

PDEBENCH: An Extensive Benchmark for Scientific Machine Learning

Authors

Makoto Takamoto, Timothy Praditia, Raphael Leiteritz, Dan MacKinlay, Francesco Alesiani, Dirk Pflüger, Mathias Niepert

Abstract

Machine learning-based modeling of physical systems has experienced increased interest in recent years. Despite some impressive progress, there is still a lack of benchmarks for Scientific ML that are easy to use but still challenging and representative of a wide range of problems. We introduce PDEBench, a benchmark suite of time-dependent simulation tasks based on Partial Differential Equations (PDEs). PDEBench comprises both code and data to benchmark the performance of novel machine learning models against both classical numerical simulations and machine learning baselines. Our proposed set of benchmark problems contributes the following unique features: (1) a much wider range of PDEs compared to existing benchmarks, ranging from relatively common examples to more realistic and difficult problems; (2) much larger ready-to-use datasets compared to prior work, comprising multiple simulation runs across a larger number of initial and boundary conditions and PDE parameters; (3) more extensible source code with user-friendly APIs for data generation and baseline results with popular machine learning models (FNO, U-Net, PINN, Gradient-Based Inverse Method). PDEBench allows researchers to extend the benchmark freely for their own purposes using a standardized API and to compare the performance of new models to existing baseline methods. We also propose new evaluation metrics with the aim to provide a more holistic understanding of learning methods in the context of Scientific ML. With those metrics we identify tasks which are challenging for recent ML methods and propose these tasks as future challenges for the community. The code is available at https://github.com/pdebench/PDEBench.
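The abstract highlights evaluation metrics intended to give a more holistic picture of model quality. One common scalar metric for comparing a learned surrogate against a reference simulation is a normalized root-mean-square error (nRMSE). The sketch below is purely illustrative, assuming plain NumPy arrays; the function name `nrmse` and the array shapes are assumptions, and this is not necessarily PDEBench's exact metric definition or API.

```python
import numpy as np

def nrmse(pred, target):
    """Root-mean-square error normalized by the RMS magnitude of the target.

    `pred` and `target` are arrays of identical shape, e.g.
    (batch, *spatial, channels); the error is averaged over all axes.
    Illustrative only -- not PDEBench's exact metric definition.
    """
    err = np.sqrt(np.mean((pred - target) ** 2))
    scale = np.sqrt(np.mean(target ** 2))
    return err / scale

# Example: compare a slightly perturbed "prediction" against a reference
# sine-wave field of shape (1, 256, 1).
x = np.linspace(0.0, 2.0 * np.pi, 256)
target = np.sin(x)[None, :, None]
pred = target + 0.01 * np.cos(3.0 * x)[None, :, None]
print(f"nRMSE: {nrmse(pred, target):.4f}")
```

Normalizing by the target's RMS makes errors comparable across PDE fields with very different magnitudes, which matters when a benchmark spans many equations and parameter regimes.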
