Abstract

Liquid State Machines (LSMs) exploit the power of recurrent spiking neural networks (SNNs) without training the SNN itself. Instead, a reservoir, or liquid, is randomly created and acts as a filter for a readout function. We develop three methods for iteratively refining a randomly generated liquid to create a more effective one. First, we apply Hebbian learning to LSMs by building the liquid with spike-timing-dependent plasticity (STDP) synapses. Second, we create an eligibility-based reinforcement learning algorithm for synaptic development. Third, we combine principles of Hebbian learning and reinforcement learning in a new algorithm called separation-driven synaptic modification (SDSM). These three methods are compared across four artificial pattern recognition problems, generating only fifty liquids for each problem. Each algorithm improves LSMs overall, with SDSM demonstrating the greatest improvement. SDSM is also shown to generalize well and to outperform traditional LSMs when presented with speech data obtained from the TIMIT dataset.
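For readers unfamiliar with the architecture, the following is a minimal, illustrative sketch of the pipeline the abstract describes: a randomly generated spiking reservoir whose response (the "liquid state") is passed to a trained readout. The neuron model, network sizes, and constants here are assumptions chosen for brevity and do not reflect the parameters or implementation used in the thesis.

```python
# Minimal LSM sketch: random leaky integrate-and-fire reservoir + linear readout.
# All sizes and constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES, N_CLASSES = 8, 100, 4                      # assumed dimensions
W_in = rng.normal(0.0, 0.5, (N_RES, N_IN)) * (rng.random((N_RES, N_IN)) < 0.3)
W_res = rng.normal(0.0, 0.2, (N_RES, N_RES)) * (rng.random((N_RES, N_RES)) < 0.1)

def liquid_state(spike_train, v_thresh=1.0, leak=0.9):
    """Run an input spike train (T x N_IN, 0/1) through the random reservoir
    and return per-neuron spike counts as the liquid state vector."""
    v = np.zeros(N_RES)
    prev_spikes = np.zeros(N_RES)
    counts = np.zeros(N_RES)
    for x in spike_train:
        v = leak * v + W_in @ x + W_res @ prev_spikes   # membrane update
        prev_spikes = (v >= v_thresh).astype(float)     # threshold -> spikes
        v[prev_spikes > 0] = 0.0                        # reset fired neurons
        counts += prev_spikes
    return counts

def train_readout(states, labels):
    """Least-squares readout on liquid states with one-hot targets."""
    S = np.asarray(states, dtype=float)
    X = np.hstack([S, np.ones((len(S), 1))])            # bias column
    Y = np.eye(N_CLASSES)[labels]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def predict(W, state):
    x = np.append(state, 1.0)
    return int(np.argmax(x @ W))

# Example usage on random data, purely for illustration:
# trains = [(rng.random((50, N_IN)) < 0.1).astype(float) for _ in range(20)]
# labels = rng.integers(0, N_CLASSES, 20)
# S = np.array([liquid_state(t) for t in trains])
# W = train_readout(S, labels)
# print(predict(W, S[0]), labels[0])
```

The methods studied in the thesis (STDP, eligibility-based reinforcement learning, and SDSM) modify the reservoir weights themselves; only the fixed random reservoir and readout are sketched here.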

Degree

MS

College and Department

Physical and Mathematical Sciences; Computer Science

Rights

http://lib.byu.edu/about/copyright/

Date Submitted

2008-03-18

Document Type

Thesis

Handle

http://hdl.lib.byu.edu/1877/etd2316

Keywords

computer, liquid state machine, Hebbian learning, reinforcement learning, neural network, spiking neural network, machine learning

Language

English
