On the turnpike to design of deep neural networks: explicit depth bounds
Citation Link: https://doi.org/10.15480/882.13658
Publication Type
Journal Article
Date Issued
2024-12-01
Language
English
TORE-DOI
10.15480/882.13658
Volume
30
Article Number
100290
Citation
IFAC Journal of Systems and Control 30: 100290 (2024-12-01)
Publisher
Elsevier
It is well known that the training of deep neural networks (DNNs) can be formalized in the language of optimal control. In this context, this paper leverages classical turnpike properties of optimal control problems to attempt a quantifiable answer to the question of how many layers should be considered in a DNN. The underlying assumption is that the number of neurons per layer, i.e., the width of the DNN, is kept constant. Pursuing a different route than the classical analysis of approximation properties of sigmoidal functions, we prove explicit bounds on the required depth of DNNs based on asymptotic reachability assumptions and a dissipativity-inducing choice of the regularization terms in the training problem. Numerical results obtained for the two-spiral classification task indicate that the proposed constructive estimates can provide non-conservative depth bounds.
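To make the optimal-control reading of DNN training concrete, the following is a minimal illustrative sketch, not the paper's implementation: a residual network of constant width is treated as a discrete-time control system whose horizon is the depth, trained with a terminal classification loss plus a quadratic per-layer stage cost that stands in for the dissipativity-inducing regularization described in the abstract. The functions two_spirals, forward, and training_objective, the weight lam, and the step size h are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_spirals(n=200, noise=0.05):
    """Two interleaved spirals, one per class (standard benchmark
    construction; the paper's exact sampling parameters are not
    given in this record)."""
    t = np.sqrt(rng.uniform(0.25, 1.0, n)) * 3.0 * np.pi
    x = np.stack([t * np.cos(t), t * np.sin(t)], axis=1) / (3.0 * np.pi)
    x = np.concatenate([x, -x]) + noise * rng.standard_normal((2 * n, 2))
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

def forward(x0, params, h=0.1):
    """Propagate one sample through L residual layers, reading the
    network as a discrete-time control system
        x_{k+1} = x_k + h * tanh(W_k x_k + b_k),
    where the depth L plays the role of the optimal-control horizon."""
    x = x0
    for W, b in params:
        x = x + h * np.tanh(W @ x + b)
    return x

def training_objective(X, y, params, lam=1e-2):
    """Terminal loss plus a per-layer (stage) cost on the 'controls'
    (W_k, b_k).  The quadratic stage cost stands in for the paper's
    dissipativity-inducing regularization (assumption: the exact stage
    cost may differ); it drives the middle layers of a deep network
    toward an inactive steady state, i.e. the turnpike."""
    terminal = np.mean([(forward(x, params)[0] - yi) ** 2
                        for x, yi in zip(X, y)])
    stage = lam * sum(np.sum(W ** 2) + np.sum(b ** 2) for W, b in params)
    return 0.5 * terminal + stage

# Illustrative usage: constant width 2, depth L = 20.
X, y = two_spirals()
L, width = 20, 2
params = [(0.1 * rng.standard_normal((width, width)), np.zeros(width))
          for _ in range(L)]
print(training_objective(X, y, params))
```

The turnpike intuition behind the depth bounds is that, for a sufficiently deep network, the optimal parameters (W_k, b_k) stay close to the regularization-induced steady state for most intermediate layers k; beyond the explicit depth bound, additional layers only lengthen the stretch spent on the turnpike and contribute little to the terminal loss.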
Subjects
Artificial neural networks
Deep learning
Dissipativity
Machine learning
Turnpike properties
MLE@TUHH
DDC Class
003: Systems Theory
006: Special computer methods
519: Applied Mathematics, Probabilities
Publication version
publishedVersion
Name
1-s2.0-S2468601824000518-main.pdf
Type
Main Article
Size
753.78 KB
Format
Adobe PDF