Abstract

Reservoir computing and "next-generation reservoir computing" built on nonlinear vector autoregression are algorithms for predicting a time series given past observations. Both algorithms have succeeded in predicting some chaotic dynamical systems, but this success is not well understood theoretically. This paper considers the conditioning of both algorithms and the nature of the solutions they find. With either algorithm, the least squares problem solved to obtain parameters as a function of the time-series data is ill-conditioned, leading to parameter non-identifiability; both algorithms thus fall into the class of "sloppy models." Standard practice for regularizing either algorithm introduces a bias toward parameters of minimal Euclidean norm. Starting from the simpler case of linear autoregression, we illustrate how the solutions found in this way differ from the "true" parameters, and why this may nonetheless improve forecasting performance.
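
The following minimal sketch (not taken from the thesis; the toy signal, lag count, and ridge parameter are illustrative assumptions) shows the phenomenon the abstract describes in its simplest setting: fitting a linear autoregression by least squares yields a badly conditioned design matrix, and the standard remedies, the pseudoinverse and ridge regression, both bias the fitted parameters toward minimal Euclidean norm.

```python
# Sketch: ill-conditioning and minimal-norm bias in linear autoregression.
# All names and the toy signal are assumptions, not the thesis's experiments.
import numpy as np

rng = np.random.default_rng(0)

# Toy time series: a noisy sinusoid observed at T points, fit with p lags.
# p is deliberately large so the regression is overparameterized.
T, p = 200, 50
x = np.sin(0.1 * np.arange(T)) + 1e-3 * rng.standard_normal(T)

# Design matrix: row t holds the lags x[t], ..., x[t+p-1]; target is x[t+p].
X = np.column_stack([x[i:T - p + i] for i in range(p)])
y = x[p:]

# A nearly low-rank X makes the least squares problem ill-conditioned,
# so many different parameter vectors fit the data almost equally well.
print(f"condition number of X: {np.linalg.cond(X):.2e}")

# Minimum-Euclidean-norm least squares solution via the pseudoinverse.
w_pinv = np.linalg.pinv(X) @ y

# Ridge (Tikhonov) regression: shrinks the solution toward small norm.
lam = 1e-6
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Both solutions achieve small residuals, but neither recovers a unique
# "true" parameter vector: the parameters are non-identifiable.
for name, w in [("pinv", w_pinv), ("ridge", w_ridge)]:
    print(f"{name}: residual={np.linalg.norm(X @ w - y):.2e}, "
          f"||w||={np.linalg.norm(w):.2e}")
```

Running the sketch typically prints an enormous condition number, and both fits attain comparably small residuals while their coefficient vectors have small norm, matching the abstract's point that the regularized solutions differ from any "true" parameters yet can still predict well.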

Degree

MS

College and Department

Computational, Mathematical, and Physical Sciences; Mathematics

Rights

https://lib.byu.edu/about/copyright/

Date Submitted

2025-04-24

Document Type

Thesis

Handle

http://hdl.lib.byu.edu/1877/etd13663

Keywords

machine learning, recurrent neural networks, autoregression, difference equations

Language

English
