Accelerating neural network architecture search using multi-GPU high-performance computing
Lupión, Marcos; Cruz, N. C.; Sanjuan, Juan F.; Paechter, Ben; Ortigosa, Pilar M.
Neural networks stand out in artificial intelligence because they can complete challenging tasks, such as image classification. However, designing a neural network for a particular problem requires experience and tedious trial and error. Automating this process defines a research field that usually relies on population-based meta-heuristics. This kind of optimizer generally needs numerous function evaluations, which are computationally demanding in this context because each one involves building, training, and evaluating a different neural network. Fortunately, these algorithms are also well suited to parallel computing. This work describes how the teaching–learning-based optimization (TLBO) algorithm has been adapted to design neural networks by exploiting a multi-GPU high-performance computing environment. The optimizer, which to the authors' knowledge has not been applied to this purpose before, was selected because it lacks algorithm-specific parameters and is compatible with large-scale optimization. Thus, its configuration does not become a problem in itself, and it can design architectures with many layers. The parallelization scheme is decoupled from the optimizer: it acts as an external evaluation service that manages multiple GPUs, even on different machines, for promising neural network designs, and multiple CPUs for low-performing solutions. This strategy has been tested by designing a neural network for image classification on the CIFAR-10 dataset. The architectures found outperform human designs, and parallelization accelerates the sequential process by a factor of 4.2 with 4 GPUs and 96 cores, against an ideal speedup of 4.39 in this case.
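The abstract describes the parallelization scheme as an evaluation service decoupled from the optimizer, dispatching promising candidate architectures to GPU workers and low-performing ones to CPU cores. A minimal sketch of that dispatch idea, assuming hypothetical helpers (`evaluate_on_gpu`, `evaluate_on_cpu`, `is_promising`) that stand in for the authors' actual build/train/test pipeline:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch, not the authors' implementation: promising candidates
# go to a small pool of GPU workers; low-performing ones are evaluated more
# cheaply on CPU cores. Scores here are stand-ins for validation accuracy.

def evaluate_on_gpu(candidate):
    # Placeholder for building, training, and testing the network on a GPU.
    return candidate["score"] + 0.1

def evaluate_on_cpu(candidate):
    # Placeholder for a cheaper, lower-fidelity CPU evaluation.
    return candidate["score"]

def is_promising(candidate, threshold=0.5):
    # Illustrative filter deciding which resource a candidate deserves.
    return candidate["score"] >= threshold

def evaluate_population(population):
    # Pool sizes mirror the setup reported in the abstract: 4 GPUs, 96 cores.
    with ThreadPoolExecutor(max_workers=4) as gpu_pool, \
         ThreadPoolExecutor(max_workers=96) as cpu_pool:
        futures = [
            (gpu_pool if is_promising(c) else cpu_pool).submit(
                evaluate_on_gpu if is_promising(c) else evaluate_on_cpu, c)
            for c in population
        ]
        return [f.result() for f in futures]

if __name__ == "__main__":
    pop = [{"score": s} for s in (0.2, 0.6, 0.8)]
    print(evaluate_population(pop))
```

Because the service only exchanges candidate descriptions and fitness values with the optimizer, the same scheme would work with workers spread across machines (e.g., behind a job queue) without changing the TLBO loop itself.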
Lupión, M., Cruz, N. C., Sanjuan, J. F., Paechter, B., & Ortigosa, P. M. (in press). Accelerating neural network architecture search using multi-GPU high-performance computing. Journal of Supercomputing, https://doi.org/10.1007/s11227-022-04960-z
Journal Article Type: Article
Acceptance Date: Nov 16, 2022
Online Publication Date: Dec 1, 2022
Deposit Date: Nov 21, 2022
Publicly Available Date: Dec 2, 2023
Journal: Journal of Supercomputing
Peer Reviewed: Yes
Keywords: Artificial neural networks, Neural network design, HPC, TLBO, Multi-GPU
This file is under embargo until Dec 2, 2023 due to copyright reasons.