From Model Complexity Reduction to Feature Selection in Deep Learning: a Regularization Story
HDR thesis, 2024



Abstract

In the last decade, the scientific community has witnessed the blooming and massive exploitation of deep neural networks (DNNs). This trend is fueled by multiple factors: the inherent flexibility DNNs offer for learning input-output functions from big data, and the scaling-up of the computational capability of computing devices, which has captured the attention of a growing research community that progressively improves their performance. This sweeping trend finds its (momentary) apex in foundation models: DNNs trained on staggeringly large quantities of data, able to extract rich features that adapt to a broad range of downstream tasks. Alongside the enthusiasm for achieving generality with these models, multiple problems have been encountered, especially concerning their usability in resource-constrained environments and learning from biased data. This work analyzes these two aspects and shows how they are intrinsically linked. It summarizes my research conducted after the PhD, divided mainly into two parts:

- In the first part, model compression and efficiency are treated, with a special emphasis on deep neural network pruning;
- In the second part, model debiasing is treated, with openings toward privacy for DNNs (understood as feature hiding).

Along the way, new research trends currently under study are suggested: the employment of adapters for efficient fine-tuning of large pre-trained models, training with a selection of a subset of neurons to update, on-device learning with improved selection of a DNN to fine-tune, DNN depth reduction, and more. In all the presented approaches, the recurrent underlying theme, whether implicit or explicit, is the design of regularization for DNNs.
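The recurring theme above, regularization that induces sparsity which can then be exploited for pruning, can be illustrated with a minimal, hypothetical sketch (not taken from the manuscript, which studies more elaborate regularizers): an L1-penalized linear regression solved by proximal gradient descent, after which near-zero weights are pruned.

```python
import numpy as np

# Toy illustration of regularization-driven pruning (hypothetical
# example): an L1 penalty drives uninformative weights to zero, and
# small-magnitude weights are then pruned by thresholding.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = np.zeros(10)
true_w[:3] = [2.0, -1.5, 1.0]          # only 3 informative features
y = X @ true_w + 0.01 * rng.normal(size=200)

w = np.zeros(10)
lr, lam = 0.01, 0.1                     # step size, L1 strength
for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(y)  # gradient of the squared loss
    w -= lr * grad
    # proximal (soft-thresholding) step for the L1 penalty
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)

kept = np.abs(w) > 1e-3                 # prune near-zero weights
print(int(kept.sum()))                  # number of surviving weights
```

The same principle carries over to DNNs, where a sparsity-inducing penalty on weights (or on whole neurons and channels) makes magnitude-based pruning effective.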

Dates and versions

tel-04803096 , version 1 (25-11-2024)

Identifiers

  • HAL Id : tel-04803096 , version 1

Cite

Enzo Tartaglione. From Model Complexity Reduction to Feature Selection in Deep Learning: a Regularization Story. Computer Science [cs]. Institut Polytechnique de Paris, 2024. ⟨tel-04803096⟩
