
THE RISE OF THE LOTTERY HEROES: WHY ZERO-SHOT PRUNING IS HARD

Enzo Tartaglione
Abstract : Recent advances in deep learning optimization have shown that only a subset of a model's parameters is really necessary for successful training. Such a discovery potentially has broad impact, from theory to application; however, finding these trainable sub-networks is typically a costly process. This inhibits practical applications: can the learned sub-graph structures in deep learning models be found at training time? In this work we explore this possibility, observing and motivating why common approaches typically fail in the extreme scenarios of interest, and we propose an approach that potentially enables training with reduced computational effort. Experiments on challenging architectures and datasets suggest that this computational gain is algorithmically accessible; in particular, a trade-off emerges between the accuracy achieved and the training complexity deployed.
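
As context for the "zero-shot pruning" setting named in the title, the sketch below shows a common baseline the abstract alludes to: one-shot global magnitude pruning at initialization, after which only the surviving sub-network is trained. This is a minimal illustration in PyTorch, not the method proposed in the paper; the function name magnitude_prune_at_init, the sparsity level, and the toy model are illustrative assumptions.

# Illustrative sketch only (NOT the paper's proposed method):
# generic one-shot global magnitude pruning at initialization, assuming PyTorch.
import torch
import torch.nn as nn

def magnitude_prune_at_init(model: nn.Module, sparsity: float = 0.9):
    """Zero out the globally smallest-magnitude weights before any training."""
    weights = [m.weight for m in model.modules()
               if isinstance(m, (nn.Linear, nn.Conv2d))]
    scores = torch.cat([w.detach().abs().flatten() for w in weights])
    k = max(1, int(sparsity * scores.numel()))
    threshold = torch.kthvalue(scores, k).values   # k-th smallest magnitude
    masks = []
    for w in weights:
        mask = (w.detach().abs() > threshold).to(w.dtype)
        w.data.mul_(mask)        # prune once, at initialization ("zero-shot")
        masks.append(mask)       # keep masks to re-apply after each update
    return masks

# Toy usage (hypothetical model): the surviving sub-network is then trained
# normally, re-applying the masks after every optimizer step to stay sparse.
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
masks = magnitude_prune_at_init(model, sparsity=0.9)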
Document type : Conference papers

https://hal.archives-ouvertes.fr/hal-03766740
Contributor : Enzo Tartaglione
Submitted on : Thursday, September 1, 2022 - 1:40:52 PM
Last modification on : Wednesday, September 14, 2022 - 3:52:34 AM
Long-term archiving on : Friday, December 2, 2022 - 6:27:46 PM

File

ICIP_22_LotteryHeroes-1.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-03766740, version 1
  • ARXIV : 2207.09455

Citation

Enzo Tartaglione. THE RISE OF THE LOTTERY HEROES: WHY ZERO-SHOT PRUNING IS HARD. IEEE International Conference on Image Processing (ICIP 22), Oct 2022, Bordeaux, France. ⟨hal-03766740⟩

Metrics

Record views : 40
Files downloads : 1