Ayouaz, Rayan [UCL]
Dupont, Pierre [UCL]
Over the years, feature selection has grown in importance in machine learning. As applications of deep learning multiply across a wide variety of disciplines, the need for interpretability grows as well, driven by domain experts confronted with deep models in their work. By restricting statistical models to relevant features only, feature selection intrinsically prevents overfitting; it is therefore an important regularization step and one of the most powerful ways to make a model more accurate and more faithful to the reality it describes. From another perspective, feature selection can be used more pragmatically to reduce the dimensionality of a problem as part of a preprocessing procedure. In this work, we focus on embedded feature selection through deep models, that is, deep neural networks that perform feature selection during the training process. In most cases, these models are simple to illustrate and intuitive to understand. However, they still suffer from a lack of theoretical results, and much work and experimentation remain before such structures can be integrated into state-of-the-art models. We observe how these models behave under different tunings and compare them, expecting that some will significantly reduce dimensionality while maintaining model performance.
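The thesis's own deep models are not reproduced here; as a minimal illustration of the *embedded* selection principle the abstract describes (relevance learned jointly with the model parameters), the sketch below uses the classic linear case: an L1 penalty optimized by proximal gradient descent, which drives the weights of irrelevant features exactly to zero during training. All data and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Toy data: 10 input features, but only features 0 and 3 drive the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=200)

# Embedded selection via an L1 penalty, optimized with proximal
# gradient descent (ISTA): each step is a gradient step on the squared
# loss followed by soft-thresholding, which zeroes out small weights.
w = np.zeros(10)
lr, lam = 0.01, 0.5  # step size and L1 strength (assumed values)
for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(y)                       # loss gradient
    w = w - lr * grad                                       # gradient step
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold

selected = np.nonzero(np.abs(w) > 1e-3)[0]
print(selected)  # only the truly relevant features survive
```

Deep embedded methods generalize this idea by placing a sparsity-inducing penalty or gating mechanism on the input layer of a neural network, so that selection happens as a side effect of ordinary training rather than as a separate preprocessing step.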
Bibliographic reference
Ayouaz, Rayan. Deep feature selection. Ecole polytechnique de Louvain, Université catholique de Louvain, 2021. Prom. : Dupont, Pierre.
Permanent URL
http://hdl.handle.net/2078.1/thesis:32970