Accelerating neural network architecture search using multi-GPU high-performance computing
Authors
Lupión, Marcos; Cruz, N. C.; Sanjuan, Juan F.; Paechter, Ben; Ortigosa, Pilar M.
Abstract
Neural networks stand out in artificial intelligence because they can complete challenging tasks, such as image classification. However, designing a neural network for a particular problem requires experience and tedious trial and error. Automating this process defines a research field that usually relies on population-based meta-heuristics. This kind of optimizer generally needs numerous function evaluations, which are computationally demanding in this context as they involve building, training, and evaluating different neural networks. Fortunately, these algorithms are also well suited for parallel computing. This work describes how the teaching–learning-based optimization (TLBO) algorithm has been adapted to design neural networks while exploiting a multi-GPU high-performance computing environment. The optimizer, not previously applied for this purpose to the best of the authors' knowledge, has been selected because it lacks algorithm-specific parameters and is compatible with large-scale optimization. Thus, its configuration does not become a problem in itself, and it can design architectures with many layers. The parallelization scheme is decoupled from the optimizer. It can be seen as an external evaluation service that manages multiple GPUs, even on different machines, for promising neural network designs, and multiple CPUs for low-performing solutions. This strategy has been tested by designing a neural network for image classification on the CIFAR-10 dataset. The architectures found outperform human designs, and parallelization accelerates the sequential process 4.2 times with 4 GPUs and 96 cores, the ideal speedup being 4.39 in this case.
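The paper itself details the TLBO adaptation and the evaluation service. As a rough illustration of the decoupled scheme summarised above, the sketch below (not the authors' code; the pool sizes, the `is_promising` heuristic, and the placeholder `train_and_evaluate` objective are assumptions) shows how candidate architectures could be routed to a small GPU pool or a larger CPU pool.

```python
# Minimal sketch of a decoupled evaluation service, assuming two worker pools:
# promising candidates go to a small GPU pool, the rest to a larger CPU pool.
# The routing heuristic, pool sizes, and dummy objective are illustrative only.
from concurrent.futures import ProcessPoolExecutor
import random

N_GPUS, N_CPU_WORKERS = 4, 96  # resource counts reported in the abstract

def train_and_evaluate(architecture, device):
    """Placeholder objective: build, train, and score a network on `device`.
    A real implementation would construct the model from `architecture`
    and train it on CIFAR-10; here it just returns a random accuracy."""
    return random.random()

def is_promising(architecture):
    """Illustrative heuristic deciding whether a design deserves a GPU.
    The paper's actual criterion may differ."""
    return len(architecture) >= 5

def evaluate_population(population):
    """Dispatch each candidate to the GPU or CPU pool and collect fitnesses."""
    with ProcessPoolExecutor(max_workers=N_GPUS) as gpu_pool, \
         ProcessPoolExecutor(max_workers=N_CPU_WORKERS) as cpu_pool:
        futures = []
        for arch in population:
            pool, device = (gpu_pool, "cuda") if is_promising(arch) else (cpu_pool, "cpu")
            futures.append(pool.submit(train_and_evaluate, arch, device))
        return [f.result() for f in futures]

if __name__ == "__main__":
    # Each candidate is a list of layer widths; the TLBO optimizer would
    # generate and update these designs between evaluation rounds.
    population = [[random.randint(16, 256) for _ in range(random.randint(2, 8))]
                  for _ in range(10)]
    print(evaluate_population(population))
```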
Citation
Lupión, M., Cruz, N. C., Sanjuan, J. F., Paechter, B., & Ortigosa, P. M. (2023). Accelerating neural network architecture search using multi-GPU high-performance computing. Journal of Supercomputing, 79, 7609-7625. https://doi.org/10.1007/s11227-022-04960-z
| Journal Article Type | Article |
| --- | --- |
| Acceptance Date | Nov 16, 2022 |
| Online Publication Date | Dec 1, 2022 |
| Publication Date | 2023-05 |
| Deposit Date | Nov 21, 2022 |
| Publicly Available Date | Dec 2, 2023 |
| Journal | Journal of Supercomputing |
| Print ISSN | 0920-8542 |
| Publisher | Springer |
| Peer Reviewed | Peer Reviewed |
| Volume | 79 |
| Pages | 7609-7625 |
| DOI | https://doi.org/10.1007/s11227-022-04960-z |
| Keywords | Artificial neural networks, Neural network design, HPC, TLBO, Multi-GPU |
| Public URL | http://researchrepository.napier.ac.uk/Output/2963070 |
Files
Accelerating Neural Network Architecture Search Using Multi-GPU High-performance Computing (accepted version)
(469 KB)
PDF