Physics-Informed Neural Networks (PINNs) solve partial differential equations (PDEs) by embedding physical constraints, such as PDE residuals and boundary conditions, into the neural network's training loss. This work derives task-specific scaling laws for PINNs, characterizing how generalization error scales with model parameters, training iterations, and data volume across various PDE families.
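To make the setup concrete, below is a minimal sketch of a PINN training loop for a 1D Poisson problem. The specific equation, network architecture, and loss weighting here are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal PINN sketch for u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0.
# Illustrative only; the problem and hyperparameters are assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

def f(x):
    # Assumed source term: u(x) = sin(pi x) solves u'' = -pi^2 sin(pi x).
    return -torch.pi ** 2 * torch.sin(torch.pi * x)

def pinn_loss(x_interior, x_boundary):
    # PDE residual: differentiate the network output twice w.r.t. x.
    x = x_interior.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = d2u - f(x)
    # Boundary penalty enforces u = 0 at x = 0 and x = 1.
    bc = net(x_boundary)
    return residual.pow(2).mean() + bc.pow(2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    x_in = torch.rand(128, 1)             # random collocation points
    x_bc = torch.tensor([[0.0], [1.0]])   # boundary points
    opt.zero_grad()
    loss = pinn_loss(x_in, x_bc)
    loss.backward()
    opt.step()
```

The composite loss (residual term plus boundary term) is the "unique structure" that the findings below point to as the source of non-standard scaling behavior.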
Key findings
PINN scaling deviates significantly from standard neural scaling laws due to the unique structure of physics-based loss functions.
Identified universal scaling exponents and task-specific correction factors through empirical analysis on a benchmark suite of 50+ PDE problems.
Validated the scaling laws on out-of-distribution tasks and demonstrated their utility for compute-optimal training allocation (a fitting sketch follows this list).
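To illustrate how such scaling laws can inform compute allocation, here is a hedged sketch that fits a saturating power law E(N) = a·N^(−α) + c to pilot-run errors and extrapolates to larger models. The functional form, the data points, and the fitted values are assumptions for illustration, not exponents or results from the paper.

```python
# Hedged sketch: fit a power-law scaling curve to measured generalization
# errors from small pilot runs, then extrapolate to a larger model size.
# All numbers below are hypothetical, not values from the paper.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n_params, a, alpha, c):
    # Assumed form: error = a * N^(-alpha) + irreducible floor c.
    return a * n_params ** (-alpha) + c

# Hypothetical (model size, measured error) pairs from pilot runs.
n = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
err = np.array([0.21, 0.12, 0.071, 0.044, 0.029])

(a, alpha, c), _ = curve_fit(scaling_law, n, err, p0=[10.0, 0.4, 0.01])
print(f"fitted exponent alpha = {alpha:.3f}")

# Extrapolate: predicted error if we scale to 10M parameters.
print(f"predicted error at 1e7 params: {scaling_law(1e7, a, alpha, c):.4f}")
```

Fits like this let one compare the marginal error reduction from more parameters against more iterations or data before committing a full compute budget, which is the practical use of compute-optimal allocation described above.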
Limitations & open questions
The study focuses on PINNs, and its findings may not generalize to other types of neural networks.
The derived scaling laws are based on a specific set of PDEs and may not apply to all scientific machine learning tasks.