(2022) Decentralized Local Stochastic Extra-Gradient for Variational Inequalities.
Full text: 2106.08315.pdf (Submitted Version, 952kB)
Abstract
We consider distributed stochastic variational inequalities (VIs) on unbounded domains with heterogeneous (non-IID) problem data distributed across many devices. We make a very general assumption on the computational network that, in particular, covers the settings of fully decentralized computation over time-varying networks as well as the centralized topologies commonly used in Federated Learning. Moreover, multiple local updates can be performed on the workers to reduce the communication frequency between them. We extend the stochastic extragradient method to this very general setting and theoretically analyze its convergence rate in the strongly monotone, monotone, and non-monotone settings (when a Minty solution exists). The provided rates explicitly exhibit the dependence on network characteristics (e.g., mixing time), iteration counter, data heterogeneity, variance, number of devices, and other standard parameters. As a special case, our method and analysis apply to distributed stochastic saddle-point problems (SPPs), e.g., to the training of Deep Generative Adversarial Networks (GANs), for which decentralized training has been reported to be extremely challenging. In experiments on decentralized training of GANs we demonstrate the effectiveness of our proposed approach.
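To make the method described in the abstract concrete, the following is a minimal runnable sketch of decentralized local stochastic extragradient on a toy problem. It is not the paper's implementation: the bilinear saddle-point operator, the worker count, the ring gossip matrix `W`, the step size, and the noise level are all illustrative assumptions. Each worker runs several local extragradient steps on its own (non-IID) operator, then one gossip round averages iterates over the network.

```python
import numpy as np

# Sketch of a decentralized local stochastic extragradient scheme.
# Each worker m holds a local operator F_m; here F_m is the operator of a
# toy bilinear saddle-point problem min_x max_y x^T A_m y (a standard
# monotone VI with solution z* = 0). All constants are illustrative.

rng = np.random.default_rng(0)
n_workers, dim = 4, 5
local_steps, rounds = 3, 200   # local updates per communication round
lr, noise = 0.05, 0.01         # step size and stochastic-noise level

# Heterogeneous (non-IID) local data: one matrix A_m per worker.
A = [rng.standard_normal((dim, dim)) for _ in range(n_workers)]

def F(m, z):
    """Stochastic local VI operator of worker m.
    For z = (x, y), F(z) = (A_m y, -A_m^T x) plus Gaussian noise."""
    x, y = z[:dim], z[dim:]
    g = np.concatenate([A[m] @ y, -A[m].T @ x])
    return g + noise * rng.standard_normal(2 * dim)

# Doubly stochastic gossip (mixing) matrix for a ring topology.
W = np.zeros((n_workers, n_workers))
for m in range(n_workers):
    W[m, m] = 0.5
    W[m, (m - 1) % n_workers] = 0.25
    W[m, (m + 1) % n_workers] = 0.25

Z = np.zeros((n_workers, 2 * dim))  # one iterate per worker
for _ in range(rounds):
    for m in range(n_workers):
        z = Z[m]
        for _ in range(local_steps):
            z_half = z - lr * F(m, z)   # extrapolation step
            z = z - lr * F(m, z_half)   # update using the midpoint operator
        Z[m] = z
    Z = W @ Z  # communication round: gossip-average neighbors' iterates

print("distance of averaged iterate to z* = 0:",
      np.linalg.norm(Z.mean(axis=0)))
```

The gossip matrix `W` stands in for the paper's general (possibly time-varying) communication network; a fully connected `W` with all entries `1/n_workers` would recover the centralized, Federated-Learning-style averaging as a special case.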
Item Type: | Conference or Workshop Item (Paper) |
---|---|
Divisions: | Sebastian Stich (SS) |
Conference: | NeurIPS (Conference on Neural Information Processing Systems) |
Depositing User: | Sebastian Stich |
Date Deposited: | 12 Oct 2022 18:30 |
Last Modified: | 12 Oct 2022 18:30 |
Primary Research Area: | NRA1: Trustworthy Information Processing |
URI: | https://publications.cispa.saarland/id/eprint/3801 |