Keywords
learning transfer, liquid state machine, neural network
Abstract
We use a type of reservoir computing called the liquid state machine (LSM) to explore learning transfer. The LSM is a neural network model that uses a reservoir of recurrent spiking neurons as a filter for a readout function. We develop a method of training the reservoir, or liquid, that is not driven by residual error. Instead, the liquid is evaluated on its ability to separate different classes of input into distinct spatial patterns of neural activity. Using this method, we train liquids on two qualitatively different types of artificial problems. The resulting liquids substantially improve performance on either problem regardless of which problem was used to train them, demonstrating a significant level of learning transfer.
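For readers unfamiliar with separation-driven reservoir training, the sketch below illustrates one way such a fitness measure could be computed from the liquid's state vectors. The centroid-distance formulation, the array shapes, and the function names are illustrative assumptions for this sketch, not the paper's exact metric or training procedure.

```python
# A minimal sketch of the separation idea in the abstract: given the liquid's
# state vectors (e.g., binned spike counts per neuron) for labeled inputs,
# score the reservoir by how far apart the per-class mean states are.
# The centroid-distance metric below is an assumption, not the paper's own.
import numpy as np

def separation(states: np.ndarray, labels: np.ndarray) -> float:
    """Mean pairwise Euclidean distance between class centroids.

    states: (n_samples, n_neurons) array of liquid state vectors.
    labels: (n_samples,) array of class labels.
    """
    centroids = [states[labels == c].mean(axis=0) for c in np.unique(labels)]
    dists = [np.linalg.norm(a - b)
             for i, a in enumerate(centroids)
             for b in centroids[i + 1:]]
    return float(np.mean(dists)) if dists else 0.0

# Hypothetical usage: a training loop could perturb the liquid's synapses and
# keep changes that raise this score, rather than minimizing readout error.
rng = np.random.default_rng(0)
states = rng.random((20, 50))          # stand-in liquid responses
labels = rng.integers(0, 2, size=20)   # two input classes
print(separation(states, labels))
```

A higher score under such a metric means the liquid maps different input classes to more distinguishable activity patterns, which is the property the readout function ultimately relies on.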
Original Publication Citation
David Norton and Dan Ventura, "Improving the Separability of a Reservoir Facilitates Learning Transfer", Proceedings of the International Joint Conference on Neural Networks, pp. 2288-2293, 2009 (this first appeared in Transfer Learning for Complex Tasks: Papers from the AAAI Workshop, 2008).
BYU ScholarsArchive Citation
Norton, David and Ventura, Dan A., "Improving the Separability of a Reservoir Facilitates Learning Transfer" (2009). Faculty Publications. 866.
https://scholarsarchive.byu.edu/facpub/866
Document Type
Peer-Reviewed Article
Publication Date
2009-06-19
Permanent URL
http://hdl.lib.byu.edu/1877/2523
Publisher
IEEE
Language
English
College
Physical and Mathematical Sciences
Department
Computer Science
Copyright Status
© 2009 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Copyright Use Information
http://lib.byu.edu/about/copyright/