This paper presents a rigorous analysis of convergence rates for gradient-based learning in differentiable games, centered on Lyapunov contraction in parameter space. It unifies the analysis of several gradient-based update rules and derives explicit convergence rates for distinct game classes, including potential games, zero-sum games, and general-sum games.
Key findings
Establishes novel convergence guarantees by constructing Lyapunov functions in the parameter space of learning dynamics.
Unifies analysis of simultaneous gradient descent, competitive gradient descent, and optimistic gradient methods.
Derives explicit convergence rates for several classes of games, proving linear convergence for strongly monotone games.
Characterizes conditions for asymptotic convergence in bilinear cases despite rotational dynamics.
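The bilinear case above can be made concrete with a small numerical sketch. This is an illustrative toy, not the paper's exact setup: a scalar bilinear zero-sum game f(x, y) = x·y with an assumed step size of 0.1, comparing plain simultaneous gradient descent/ascent (which rotates and spirals away from the equilibrium) against the optimistic variant, which replaces each gradient with the extrapolation 2·g_t − g_{t−1} and contracts toward the equilibrium at (0, 0).

```python
import math

# Illustrative sketch (assumed setup, not the paper's): scalar bilinear
# zero-sum game f(x, y) = x * y, where x minimizes and y maximizes.
# The unique equilibrium is (0, 0).

eta = 0.1  # step size (assumed small enough for the optimistic method to contract)

# Simultaneous gradient descent/ascent: x steps down df/dx, y steps up df/dy.
x, y = 1.0, 1.0
for _ in range(2000):
    gx, gy = y, x  # df/dx = y, df/dy = x
    x, y = x - eta * gx, y + eta * gy
sim_norm = math.hypot(x, y)  # distance from equilibrium grows each step

# Optimistic gradient descent/ascent: use the extrapolated gradient
# 2*g_t - g_{t-1} in place of g_t for each player.
x, y = 1.0, 1.0
gx_prev, gy_prev = y, x  # initialize the "previous" gradient at the start point
for _ in range(2000):
    gx, gy = y, x
    x, y = x - eta * (2 * gx - gx_prev), y + eta * (2 * gy - gy_prev)
    gx_prev, gy_prev = gx, gy
opt_norm = math.hypot(x, y)  # distance from equilibrium shrinks geometrically

print(f"simultaneous GD distance from equilibrium: {sim_norm:.3e}")
print(f"optimistic  GD distance from equilibrium: {opt_norm:.3e}")
```

Despite the purely rotational vector field, the one-step memory in the optimistic update is enough to turn rotation into contraction, which is the phenomenon the asymptotic-convergence conditions characterize.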
Limitations & open questions
The analysis is restricted to differentiable games and may not extend to non-differentiable settings.
The practical applicability of the guarantees to GAN training remains to be validated in more complex, large-scale scenarios.