Exploring redundancy in neural networks: pruning by genetic algorithm & filter energy

Date
2018
Authors
Janjic, Sasa
Abstract
Much work has been done on making convolutional models larger and more robust, but recent work has shown that they contain significant redundancy, suggesting they are vastly more complex than necessary. The goal of this thesis is to explore the degree to which this representational redundancy can be reduced. I contribute a weight-pruning genetic algorithm and an energy-based filter-pruning algorithm, and provide insights into model compression and structure. Evolved weight pruning of MNIST-trained multilayer perceptrons and convolutional networks showed that, in some cases, 72.4% and 89.6% of layer parameters can be pruned without retraining while improving test-set accuracy. Energy-based filter pruning showed that ImageNet-trained VGG and ResNet models also exhibit significant redundancy: VGG layers incurred an average 3.2% loss in accuracy after 9.83% compression, and ResNet layers incurred a 3.30% loss after 87.94% compression for over one-third of the layers.
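The abstract names the two pruning approaches without describing them here. As an illustration only, a minimal sketch of evolving a binary keep/prune mask over one layer's weights with a simple genetic algorithm might look like the following; the population size, truncation selection, one-point crossover, bit-flip mutation, and the `fitness_fn` callback are assumptions made for the sketch, not the thesis's actual configuration.

```python
import numpy as np

def evolve_pruning_mask(weights, fitness_fn, pop_size=20, generations=50,
                        mutation_rate=0.01, seed=0):
    """Evolve a binary keep/prune mask over one layer's weight tensor.

    fitness_fn(masked_weights) should return a score to maximise, e.g. the
    network's validation accuracy with this layer replaced by masked_weights.
    """
    rng = np.random.default_rng(seed)
    n = weights.size
    flat = weights.ravel()
    # Start from mostly-kept masks so early candidates remain functional.
    population = rng.random((pop_size, n)) < 0.9

    def score(mask):
        return fitness_fn((flat * mask).reshape(weights.shape))

    for _ in range(generations):
        scores = np.array([score(m) for m in population])
        order = np.argsort(scores)[::-1]
        parents = population[order[: pop_size // 2]]      # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            point = rng.integers(1, n)                     # one-point crossover
            child = np.concatenate([a[:point], b[point:]])
            flip = rng.random(n) < mutation_rate           # bit-flip mutation
            children.append(np.where(flip, ~child, child))
        population = np.vstack([parents, children])

    best = max(population, key=score)
    return best.reshape(weights.shape)
```

The returned mask would be applied as `weights * mask` to zero out pruned connections. Likewise, a hedged sketch of energy-based filter pruning, assuming "energy" is taken as the sum of squared kernel weights of each convolutional filter and that the lowest-energy filters are removed; the energy measure and pruning criterion actually used in the thesis may differ.

```python
import numpy as np

def filter_energies(conv_weights):
    """Per-filter energy, taken here as the sum of squared kernel weights.

    conv_weights: array of shape (out_channels, in_channels, k, k).
    Returns an array of shape (out_channels,).
    """
    return np.sum(conv_weights ** 2, axis=(1, 2, 3))

def prune_low_energy_filters(conv_weights, compression):
    """Zero out the `compression` fraction of filters with the least energy."""
    energies = filter_energies(conv_weights)
    n_prune = int(round(compression * len(energies)))
    prune_idx = np.argsort(energies)[:n_prune]            # lowest-energy filters
    pruned = conv_weights.copy()
    pruned[prune_idx] = 0.0
    return pruned, np.setdiff1d(np.arange(len(energies)), prune_idx)

# Example: prune half of a randomly initialised 64-filter 3x3 conv layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 3, 3, 3))
pruned_w, kept = prune_low_energy_filters(w, compression=0.5)
print(f"{len(kept)} of {w.shape[0]} filters kept")
```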
Keywords
Neural networks, Genetic Algorithm, Compression, Pruning, Energy, MLP, CNN, VGG, ResNet