On the Convergence of Alternating Gradient Descent–Ascent in Bilinear Games
Research Landscape Overview
Claimed Contributions
The authors prove that alternating gradient descent-ascent (AltGDA) achieves an O(1/T) ergodic convergence rate in bilinear games with an interior Nash equilibrium, using a constant stepsize. This is the first result showing that alternation improves over simultaneous GDA in the constrained setting.
The authors establish a local O(1/T) convergence rate for AltGDA in general bilinear games without requiring an interior equilibrium. The constant stepsize used does not depend on game-specific parameters.
The authors develop a framework based on the performance estimation problem (PEP) that simultaneously optimizes the stepsize and the worst-case convergence rate of AltGDA. This is the first instance of stepsize optimization in performance estimation problems for primal-dual algorithms with linear operators.
Core Task Comparisons
Comparisons with papers in the same taxonomy category
[1] On the Convergence of Alternating Gradient Descent-Ascent in Bilinear Games
[3] Near-optimal Local Convergence of Alternating Gradient Descent-Ascent for Minimax Optimization
[4] Fundamental benefit of alternating updates in minimax optimization
Contribution Analysis
Detailed comparisons for each claimed contribution
O(1/T) ergodic convergence rate for AltGDA with interior Nash equilibrium
The authors prove that alternating gradient descent-ascent (AltGDA) achieves an O(1/T) ergodic convergence rate in bilinear games with an interior Nash equilibrium, using a constant stepsize. This is the first result showing that alternation improves over simultaneous GDA in the constrained setting.
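The gap between the two schemes is easy to see numerically. Below is a minimal sketch on the standard unconstrained toy game f(x, y) = x·y (not the constrained setting the claim concerns): simultaneous GDA spirals away from the equilibrium at the origin, while the alternating scheme, whose ascent step sees the freshly updated x, keeps the iterates bounded. Function names and parameter choices are illustrative.

```python
import numpy as np

def sim_gda(x, y, eta, steps):
    """Simultaneous GDA on f(x, y) = x*y: both players step from the same iterate."""
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x
    return x, y

def alt_gda(x, y, eta, steps):
    """Alternating GDA: the ascent step on y uses the freshly updated x."""
    for _ in range(steps):
        x = x - eta * y
        y = y + eta * x  # uses the new x
    return x, y

eta, T = 0.1, 500
sim = sim_gda(1.0, 1.0, eta, T)
alt = alt_gda(1.0, 1.0, eta, T)
print("SimGDA distance to equilibrium after", T, "steps:", np.hypot(*sim))
print("AltGDA distance to equilibrium after", T, "steps:", np.hypot(*alt))
```

The mechanism is visible in the update matrices: the simultaneous scheme has determinant 1 + η² > 1 per step (strict expansion), whereas the alternating scheme has determinant exactly 1, so its iterates only rotate around the equilibrium.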
Local O(1/T) convergence rate with game-independent stepsize
The authors establish a local O(1/T) convergence rate for AltGDA in general bilinear games without requiring an interior equilibrium. The constant stepsize used does not depend on game-specific parameters.
[1] On the Convergence of Alternating Gradient Descent-Ascent in Bilinear Games
[9] Competitive gradient descent
[12] Optimistic Gradient Descent Ascent in Zero-Sum and General-Sum Bilinear Games
[15] Convergence of gradient methods on bilinear zero-sum games
[18] A Robust Framework for Analyzing Gradient-Based Dynamics in Bilinear Games
[34] Linear Last-iterate Convergence in Constrained Saddle-point Optimization
[45] Finite-time last-iterate convergence for learning in multi-player games
[46] Accelerated primal-dual gradient method for smooth and convex-concave saddle-point problems with bilinear coupling
[47] Recursive Reasoning in Minimax Games: A Level Gradient Play Method
[48] A Descent-based method on the Duality Gap for solving zero-sum games
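To illustrate the ergodic O(1/T) behavior behind this claim, here is a small sketch on the scalar game f(x, y) = x·y, whose unique equilibrium is the origin. The last AltGDA iterate only orbits the equilibrium, but the running average of the iterates shrinks toward it at roughly a 1/T rate. The fixed stepsize below is a plain constant, not tuned to the game; all constants and the horizon are illustrative choices, not the paper's general construction.

```python
import numpy as np

eta = 0.5                      # fixed constant stepsize, not tuned to the game
x, y = 1.0, 1.0                # start away from the equilibrium (0, 0)
sum_x = sum_y = 0.0
avg_dist = {}                  # distance of the ergodic average to the equilibrium
for t in range(1, 10001):
    x -= eta * y               # descent step on x
    y += eta * x               # alternating: ascent step uses the new x
    sum_x += x
    sum_y += y
    if t in (100, 1000, 10000):
        avg_dist[t] = np.hypot(sum_x / t, sum_y / t)

for t, d in avg_dist.items():
    print(f"T={t:6d}  |ergodic average - equilibrium| = {d:.2e}")
```

The printed distances fall by roughly a factor of ten per decade of T, consistent with an O(1/T) ergodic rate, while the last iterate (x, y) stays a constant distance from the origin.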
Performance estimation problem (PEP) framework for joint stepsize and convergence rate optimization
The authors develop a framework based on the performance estimation problem (PEP) that simultaneously optimizes the stepsize and the worst-case convergence rate of AltGDA. This is the first instance of stepsize optimization in performance estimation problems for primal-dual algorithms with linear operators.
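A PEP framework carries out this joint optimization exactly, via semidefinite programming over worst-case problem instances. The toy sketch below conveys only the idea by brute force: for each candidate stepsize on a grid, it measures the worst-case ergodic distance to equilibrium of AltGDA over a family of scalar bilinear games f(x, y) = a·x·y, then keeps the stepsize with the best worst case. The function `ergodic_dist`, the grids, and the horizon `T` are hypothetical illustrative choices, not the authors' formulation.

```python
import numpy as np

L, T = 1.0, 200  # bound on the coupling strength a, and the iteration horizon

def ergodic_dist(a, eta, T):
    """Distance of the ergodic AltGDA average to the equilibrium (0, 0)
    on the scalar bilinear game f(x, y) = a*x*y."""
    x, y, sx, sy = 1.0, 1.0, 0.0, 0.0
    for _ in range(T):
        x -= eta * a * y
        y += eta * a * x      # alternating update uses the new x
        sx += x
        sy += y
    return np.hypot(sx / T, sy / T)

etas = np.linspace(0.1, 1.9, 19)   # AltGDA stays bounded here for eta*a < 2
games = np.linspace(0.1, L, 10)    # family of worst-case candidates
worst = [max(ergodic_dist(a, eta, T) for a in games) for eta in etas]
best = etas[int(np.argmin(worst))]
print(f"best stepsize on grid: {best:.2f}, worst-case ergodic distance: {min(worst):.3e}")
```

The actual PEP machinery replaces this grid search with an exact SDP certificate of the worst case, which is what makes joint stepsize-and-rate optimization tractable beyond toy scalar games.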