(2022) Masked Training of Neural Networks with Partial Gradients.
Full text: 2106.08895.pdf (Accepted Version)
Abstract
State-of-the-art training algorithms for deep learning models are based on stochastic gradient descent (SGD). Recently, many variations have been explored: perturbing parameters for better accuracy (such as in Extragradient), limiting SGD updates to a subset of parameters for increased efficiency (such as in meProp), or a combination of both (such as in Dropout). However, the convergence of these methods is often not studied in theory. We propose a unified theoretical framework to study such SGD variants, encompassing the aforementioned algorithms and additionally a broad variety of methods used for communication-efficient training or model compression. Our insights can be used as a guide to improve the efficiency of such methods and facilitate generalization to new applications. As an example, we tackle the task of jointly training networks, a version of which (limited to sub-networks) is used to create Slimmable Networks. By training a low-rank Transformer jointly with a standard one, we obtain better performance than when it is trained separately.
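To make the setting concrete, the following is a minimal illustrative sketch, not the authors' exact algorithm or analysis: a single SGD step in which a random binary mask restricts the update to a subset of parameter coordinates, in the spirit of the masked/partial-gradient variants (e.g. meProp-style) described in the abstract. The toy model, mask fraction, and learning rate are illustrative assumptions.

```python
# Sketch of a masked (partial-gradient) SGD step. Only a random subset of
# coordinates receives the gradient update; the rest are left unchanged.
# Hyperparameters and the toy model below are hypothetical choices.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(10, 1)     # toy model (assumption)
loss_fn = torch.nn.MSELoss()
lr, mask_fraction = 0.1, 0.5       # illustrative learning rate and mask density

x, y = torch.randn(32, 10), torch.randn(32, 1)

loss = loss_fn(model(x), y)
loss.backward()                    # full gradient of the mini-batch loss

with torch.no_grad():
    for p in model.parameters():
        # Binary mask selecting roughly `mask_fraction` of the coordinates;
        # masked-out coordinates are not updated in this step.
        mask = (torch.rand_like(p) < mask_fraction).float()
        p -= lr * mask * p.grad
        p.grad.zero_()
```

In practice the mask could also be structured (e.g. selecting whole sub-networks or low-rank factors), which is the kind of scheme the joint-training example in the abstract alludes to; the random coordinate-wise mask above is just the simplest instance of the update pattern.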
| Item Type: | Conference or Workshop Item (Paper) | 
|---|---|
| Divisions: | Sebastian Stich (SS) | 
| Conference: | AISTATS (International Conference on Artificial Intelligence and Statistics) | 
| Depositing User: | Sebastian Stich | 
| Date Deposited: | 05 Apr 2022 09:03 | 
| Last Modified: | 05 Apr 2022 09:03 | 
| Primary Research Area: | NRA2: Reliable Security Guarantees | 
| URI: | https://publications.cispa.saarland/id/eprint/3600 | 
