Conference paper, Year: 2022

Rate-Distortion Theoretic Generalization Bounds for Stochastic Learning Algorithms

Abstract

Understanding generalization in modern machine learning settings has been one of the major challenges in statistical learning theory. In this context, recent years have witnessed the development of various generalization bounds suggesting different complexity notions, such as the mutual information between the data sample and the algorithm output, compressibility of the hypothesis space, and the fractal dimension of the hypothesis space. While these bounds have illuminated the problem at hand from different angles, their suggested complexity notions may appear unrelated, thereby restricting their high-level impact. In this study, we prove novel generalization bounds through the lens of rate-distortion theory, and explicitly relate the concepts of mutual information, compressibility, and fractal dimensions in a single mathematical framework. Our approach consists of (i) defining a generalized notion of compressibility by using source coding concepts, and (ii) showing that the 'compression error rate' can be linked to the generalization error both in expectation and with high probability. We show that in the 'lossless compression' setting, we recover and improve existing mutual information-based bounds, whereas a 'lossy compression' scheme allows us to link generalization to the rate-distortion dimension, a particular notion of fractal dimension. Our results bring a more unified perspective on generalization and open up several future research directions.
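
For orientation, here is a minimal sketch, in standard notation not drawn from the paper itself, of the two objects the abstract refers to. It assumes n i.i.d. training samples S, a hypothesis W output by the algorithm, a σ-subgaussian loss, and gen(S, W) denoting the gap between population and empirical risk. The first display is the classical mutual-information bound of Xu and Raginsky (2017), which the 'lossless compression' setting recovers and improves; the second gives the rate-distortion function and one common convention for the rate-distortion dimension (Kawabata and Dembo, 1994), stated here for squared-error distortion, where conventions can differ by constant factors:

\[
\mathbb{E}\big[\operatorname{gen}(S, W)\big] \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(S; W)},
\]
\[
R(D) \;=\; \inf_{P_{\hat{W} \mid W}\,:\; \mathbb{E}[d(W, \hat{W})] \le D} I(W; \hat{W}),
\qquad
\dim_{\mathrm{R}}(W) \;=\; \limsup_{D \to 0} \frac{2\, R(D)}{\log(1/D)}.
\]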
Main file

sefidgaran22a.pdf (533.94 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-03759597, version 1 (24-08-2022)

Identifiers

  • HAL Id: hal-03759597, version 1

Cite

Milad Sefidgaran, Amin Gohari, Gael Richard, Umut Şimşekli. Rate-Distortion Theoretic Generalization Bounds for Stochastic Learning Algorithms. COLT 2022 - 35th Annual Conference on Learning Theory, Jul 2022, London, United Kingdom. ⟨hal-03759597⟩