Group sparse regularization for deep neural networks
Scardapane, S.; Comminiello, D.; Hussain, A.; Uncini, A.
Abstract
In this paper, we address the challenging task of simultaneously optimizing (i) the weights of a neural network, (ii) the number of neurons in each hidden layer, and (iii) the subset of active input features (i.e., feature selection). While these problems are traditionally dealt with separately, we propose an efficient regularized formulation that enables all three to be optimized jointly, using standard optimization routines. Specifically, we extend the group Lasso penalty, originally proposed in the linear regression literature, to impose group-level sparsity on the network's connections, where each group is defined as the set of outgoing weights from a unit. Depending on the case, a group's weights may originate from an input variable, a hidden neuron, or a bias unit, so that a single penalty performs all the aforementioned tasks and yields a compact network. We carry out an extensive experimental evaluation, in comparison with classical weight decay and Lasso penalties, both on a toy dataset for handwritten digit recognition and on multiple realistic mid-scale classification benchmarks. Comparative results demonstrate the potential of the proposed sparse group Lasso penalty to produce extremely compact networks that use significantly fewer input features, with classification accuracy equal to, or only slightly below, that of standard regularization terms.
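For readers who want to experiment with the idea, below is a minimal PyTorch sketch of a sparse group Lasso penalty of the kind the abstract describes: a group-norm term over the outgoing weights of each unit plus an element-wise L1 term. This is not the authors' code; the function name `sparse_group_lasso`, the regularization strengths, the small epsilon for numerical stability, and the column-wise grouping convention are illustrative assumptions. For a `torch.nn.Linear` layer, whose weight has shape `[out_features, in_features]`, the outgoing weights of previous-layer unit j form column j, hence the norm over `dim=0`.

```python
import torch

def sparse_group_lasso(weight: torch.Tensor,
                       lam_group: float = 1e-4,
                       lam_l1: float = 1e-4) -> torch.Tensor:
    """Sparse group Lasso penalty for one layer's weight matrix (sketch).

    Assumes `weight` has shape [out_features, in_features] (the
    torch.nn.Linear convention), so column j collects the outgoing
    weights of unit j in the previous layer: driving an entire column
    to zero prunes that unit (or input feature).
    """
    # Group term: sum of Euclidean norms of the columns, scaled by
    # sqrt(group size) as in the standard group Lasso formulation.
    # The small epsilon avoids a non-differentiable point at zero
    # (a common numerical convenience, assumed here).
    group_size = weight.shape[0]
    col_norms = torch.sqrt((weight ** 2).sum(dim=0) + 1e-12)
    group_term = (group_size ** 0.5) * col_norms.sum()
    # Element-wise L1 term keeps weights sparse inside surviving groups.
    l1_term = weight.abs().sum()
    return lam_group * group_term + lam_l1 * l1_term

# Hypothetical usage: add the penalty for every linear layer to the task loss.
# model = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(),
#                             torch.nn.Linear(32, 10))
# reg = sum(sparse_group_lasso(m.weight) for m in model
#           if isinstance(m, torch.nn.Linear))
# loss = torch.nn.functional.cross_entropy(logits, targets) + reg
```

Setting `lam_l1 = 0` recovers a plain group Lasso; the combined form is what allows both whole units and individual connections to be pruned, which is the effect the abstract attributes to the sparse group Lasso penalty.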
Citation
Scardapane, S., Comminiello, D., Hussain, A., & Uncini, A. (2017). Group sparse regularization for deep neural networks. Neurocomputing, 241, 81-89. https://doi.org/10.1016/j.neucom.2017.02.029
| Field | Value |
| --- | --- |
| Journal Article Type | Article |
| Acceptance Date | Feb 7, 2017 |
| Online Publication Date | Feb 10, 2017 |
| Publication Date | Jun 7, 2017 |
| Deposit Date | Sep 5, 2019 |
| Publicly Available Date | Sep 5, 2019 |
| Journal | Neurocomputing |
| Print ISSN | 0925-2312 |
| Publisher | Elsevier |
| Peer Reviewed | Peer Reviewed |
| Volume | 241 |
| Pages | 81-89 |
| DOI | https://doi.org/10.1016/j.neucom.2017.02.029 |
| Keywords | Deep networks, Group sparsity, Pruning, Feature selection |
| Public URL | http://researchrepository.napier.ac.uk/Output/1792507 |
Files
Group Sparse Regularization For Deep Neural Networks (PDF, 578 KB)
Licence: http://creativecommons.org/licenses/by-nc-nd/4.0/
Copyright Statement
Accepted refereed manuscript of: Scardapane S, Comminiello D, Hussain A & Uncini A (2017) Group Sparse Regularization for Deep Neural Networks, Neurocomputing, 241, pp. 81-89. DOI: 10.1016/j.neucom.2017.02.029
© 2017, Elsevier. Licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International licence (http://creativecommons.org/licenses/by-nc-nd/4.0/).