Distributed Reservoir Computing with Sparse Readouts [Research Frontier]

Scardapane, Simone; Panella, Massimo; Comminiello, Danilo; Hussain, Amir; Uncini, Aurelio

Authors

Simone Scardapane

Massimo Panella

Danilo Comminiello

Amir Hussain

Aurelio Uncini



Abstract

In a network of agents, a widespread problem is the need to estimate a common underlying function from locally distributed measurements. Real-world scenarios may not allow for centralized fusion centers, requiring the development of distributed, message-passing implementations of standard machine learning training algorithms. In this paper, we are concerned with the distributed training of a particular class of recurrent neural networks, namely echo state networks (ESNs). In the centralized case, ESNs have received considerable attention because they can be trained with standard linear regression routines. Based on this observation, in our previous work we introduced a decentralized algorithm, framed within the field of distributed optimization, for training an ESN. In this paper, we focus on an additional sparsity property of the output layer of ESNs, which allows for very efficient implementations of the resulting networks. To evaluate the proposed algorithm, we test it on two well-known prediction benchmarks, namely the Mackey-Glass chaotic time series and the 10th-order nonlinear autoregressive moving average (NARMA) system.
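For reference, the sparse-readout idea can be illustrated in the centralized case with a few lines of Python. The sketch below is purely illustrative and makes assumptions not taken from the paper: it uses scikit-learn's Lasso (L1-regularized least squares) in place of the distributed training procedure developed in the paper, and all reservoir sizes, scaling factors, and regularization values are arbitrary toy choices.

    import numpy as np
    from sklearn.linear_model import Lasso  # L1 penalty yields a sparse readout

    # Hypothetical toy setup: sizes and constants are illustrative, not from the paper.
    rng = np.random.default_rng(0)
    n_reservoir, n_inputs, T = 200, 1, 1000

    # Fixed random reservoir, rescaled so its spectral radius is below 1
    # (a common echo-state heuristic; the paper's exact setup may differ).
    W = rng.uniform(-1.0, 1.0, (n_reservoir, n_reservoir))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-1.0, 1.0, (n_reservoir, n_inputs))

    # Drive the reservoir with an input sequence and collect its states.
    u = rng.uniform(-0.5, 0.5, (T, n_inputs))
    y = np.roll(u[:, 0], -1)              # toy target: one-step-ahead prediction
    states = np.zeros((T, n_reservoir))
    x = np.zeros(n_reservoir)
    for t in range(T):
        x = np.tanh(W @ x + W_in @ u[t])
        states[t] = x

    # Sparse readout: only the linear output weights are trained, and the
    # L1 penalty drives many of them exactly to zero.
    washout = 100                          # discard the initial transient
    readout = Lasso(alpha=1e-3)
    readout.fit(states[washout:], y[washout:])
    print("non-zero readout weights:", np.count_nonzero(readout.coef_))

A readout with few non-zero weights is what makes the trained network cheap to store and evaluate, which is the efficiency property the abstract refers to; the paper's contribution is obtaining such a readout when the training data are distributed across a network of agents.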

Citation

Scardapane, S., Panella, M., Comminiello, D., Hussain, A., & Uncini, A. (2016). Distributed Reservoir Computing with Sparse Readouts [Research Frontier]. IEEE Computational Intelligence Magazine, 11(4), 59-70. https://doi.org/10.1109/MCI.2016.2601759

Journal Article Type Article
Online Publication Date Oct 10, 2016
Publication Date Nov 2016
Deposit Date Oct 4, 2019
Journal IEEE Computational Intelligence Magazine
Print ISSN 1556-603X
Electronic ISSN 1556-6048
Publisher Institute of Electrical and Electronics Engineers (IEEE)
Peer Reviewed Yes
Volume 11
Issue 4
Pages 59-70
DOI https://doi.org/10.1109/MCI.2016.2601759
Keywords Regression analysis, Training data, Algorithm design and analysis, Machine learning algorithms, Linear regression, Optimization, Recurrent neural networks
Public URL http://researchrepository.napier.ac.uk/Output/1792689