Reservoir computing offers a fast, low-power route to modelling rich temporal signals, yet the field still lacks principled guidance on how depth, neuron model, and architectural homogeneity influence performance. We therefore explored a family of bio-inspired, sequential reservoir chains that combine echo-state networks (ESNs) with different types of recurrent unit: two recently developed neuron models, the Calcitron and the Expressive Leaky Memory (ELM) neuron, alongside standard recurrent neurons.
ESNs built from these differing recurrent units were evaluated across several deep architectures and time-series data types. We investigated a single-reservoir (1-layer) baseline, a 5-layer chain, and a 10-layer chain, each with homogeneous reservoirs, and compared different inter-reservoir connection methods to examine their impact on the resulting DeepESNs.
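For orientation, the following minimal Python sketch illustrates the kind of sequential reservoir chain described above: a stack of leaky ESN layers in which each reservoir's state drives the next. The function names, hyperparameter values, and layer-to-layer wiring are illustrative assumptions, not the exact connection methods evaluated in this work.

```python
# Minimal sketch (not the paper's exact implementation) of a sequential
# reservoir chain: each layer is a standard leaky ESN whose state feeds
# the next layer; all hyperparameters here are placeholder values.
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9, sparsity=0.1, leak=0.3):
    """Create one leaky ESN layer with a sparse, rescaled recurrent matrix."""
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W[rng.random((n_res, n_res)) > sparsity] = 0.0            # sparse connectome
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))     # echo-state rescaling
    return {"W_in": W_in, "W": W, "leak": leak, "x": np.zeros(n_res)}

def step(layer, u):
    """Leaky-integrator update: x <- (1 - a) x + a tanh(W_in u + W x)."""
    pre = layer["W_in"] @ u + layer["W"] @ layer["x"]
    layer["x"] = (1 - layer["leak"]) * layer["x"] + layer["leak"] * np.tanh(pre)
    return layer["x"]

def run_chain(layers, inputs):
    """Drive the chain layer by layer; return concatenated states per step."""
    states = []
    for u in inputs:
        h, layer_states = u, []
        for layer in layers:
            h = step(layer, h)
            layer_states.append(h)
        states.append(np.concatenate(layer_states))
    return np.array(states)

# e.g. a 5-layer homogeneous chain driven by a scalar signal
layers = [make_reservoir(1 if i == 0 else 100, 100) for i in range(5)]
inputs = np.sin(np.linspace(0, 20, 500)).reshape(-1, 1)
states = run_chain(layers, inputs)
```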
Each reservoir comprised an equal number of neurons fed by a standardised sparse connectome. All models were benchmarked on memory-capacity curves, dynamic-range analyses, chaotic time-series prediction (Mackey–Glass, Lorenz), and noisy-signal reconstruction on real and synthetic, biologically relevant signal types such as chirps, random walks, and time–frequency spectra, using identical data pipelines and linear read-outs. The dynamics of the ESNs were assessed by computing their dynamical properties and numerically comparing their lower-dimensional manifolds. Interestingly, no model was best overall when only the neuron type was changed: each showed superior performance on some data types but failed on others. For the ELM and Calcitron models, light parameter tuning improved performance, as expected given the parameter-rich equations underlying these neuron models.
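A hedged sketch of the shared evaluation recipe follows: a ridge-regression linear read-out fitted on collected reservoir states, and a memory-capacity score computed as the summed squared correlation over input delays. The washout length, regularisation strength, and delay range are placeholder values rather than the settings used in the experiments.

```python
# Illustrative (not the paper's exact settings) linear read-out and
# memory-capacity evaluation on collected reservoir states.
import numpy as np

def train_readout(states, targets, washout=100, ridge=1e-6):
    """Ridge-regression read-out: solve (X^T X + lambda I) W = X^T Y."""
    X, Y = states[washout:], targets[washout:]
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ Y)          # (n_features, n_outputs)

def memory_capacity(states, u, max_delay=50, washout=100, ridge=1e-6):
    """MC = sum over delays k of r^2 between reconstructed and true u(t - k)."""
    mc = 0.0
    for k in range(1, max_delay + 1):
        X, y = states[k:], u[:-k]               # state at t paired with input at t - k
        W = train_readout(X, y.reshape(-1, 1), washout, ridge)
        pred = (X[washout:] @ W).ravel()
        r = np.corrcoef(pred, y[washout:])[0, 1]
        mc += r ** 2
    return mc

# e.g. mc = memory_capacity(states, inputs.ravel()) for a random driving signal
```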
In conclusion, we present two new echo-state network architectures whose recurrent units use different neuron models, and we evaluate their dynamics and performance on a battery of time-series tasks. When tuned appropriately, and when their weights are not left static, these models achieve superior performance on a subset of these tasks.