
Artificial neural networks training acceleration through network science strategies

Cavallaro, Lucia; Bagdasar, Ovidiu; De Meo, Pasquale; Fiumara, Giacomo; Liotta, Antonio

Abstract

The development of deep learning has led to a dramatic increase in the number of applications of artificial intelligence. However, training deeper neural networks to obtain stable and accurate models yields artificial neural networks (ANNs) that become unmanageable as the number of features increases. This work extends our earlier study, in which we explored the acceleration effects obtained by enforcing, in turn, scale-freeness, small-worldness, and sparsity during the ANN training process. The efficiency of that approach was confirmed by recent, independently conducted studies in which a million-node ANN was trained on non-specialized laptops. Encouraged by those results, we now focus on a set of tunable parameters in pursuit of further acceleration. We show that, although optimal parameter tuning is infeasible due to the high non-linearity of ANN problems, we can derive a set of useful guidelines that lead to speed-ups in practical cases. In particular, we find that significant reductions in execution time can generally be achieved by setting the revised fraction parameter (ζ) to relatively low values.

Journal Article Type Article
Online Publication Date Sep 9, 2020
Publication Date Dec 2020
Deposit Date Sep 16, 2020
Publicly Available Date Sep 16, 2020
Journal Soft Computing
Print ISSN 1432-7643
Electronic ISSN 1433-7479
Publisher Springer
Peer Reviewed Peer Reviewed
Volume 24
DOI https://doi.org/10.1007/s00500-020-05302-y
Keywords Network science, Artificial neural networks, Multilayer perceptron, Revise phase
Public URL http://researchrepository.napier.ac.uk/Output/2685966
